Columns: task_id (int64, 0 to 200) · task_name (string, length 11 to 34) · task_description (string, length 605 to 7.73k)
0
iclr2023_bands
# Backdoor Attacks and Defenses in Machine Learning

## Overview

Backdoor attacks aim to cause consistent misclassification of any input by adding a specific pattern called a trigger. Unlike adversarial attacks, which require generating perturbations on the fly to induce misclassification for a single input, backdoor attacks take effect immediately once a pre-chosen trigger is applied. Recent studies have shown the feasibility of launching backdoor attacks in various domains, such as computer vision (CV), natural language processing (NLP), and federated learning (FL). As backdoor attacks are mostly carried out through data poisoning (i.e., adding malicious inputs to training data), they raise major concerns for many publicly available pre-trained models. Companies relying on user data to construct their machine learning models are also susceptible to backdoor attacks.

Defending against backdoor attacks has sparked multiple lines of research, including detecting inputs with backdoor triggers, determining whether a model has hidden backdoors, and eliminating potential backdoors inside a model. Many defense techniques are effective against particular types of backdoor attacks, but as increasingly diverse backdoors emerge, the performance of existing defenses tends to be limited. Most attacks and defense techniques have been developed for the computer vision domain; the connections between attacks and defenses across different domains have yet to be explored. With the wide adoption of large pre-trained models in real-world applications, any injected malicious behaviors, such as backdoors in those models, are particularly concerning. It is therefore important to gather researchers in the area and expand the community to improve the security of machine learning.

This workshop aims to answer the following questions:
- What other types of backdoor attacks can we find in CV/NLP/FL machine learning models?
- Can we launch backdoor attacks in other domains, such as binary analysis tools, network intrusion detection systems, reinforcement learning, etc.?
- What are the similarities and differences of backdoor attacks in various tasks?
- How can we measure the stealthiness of backdoor attacks in different domains? What are the costs and practicality of launching backdoor attacks in the real world?
- What is the performance of existing defense techniques in studied domains? Can they be adapted to other domains?
- How can we develop a general defense method against a variety of backdoor attacks and even unseen attacks?
- Are there other forms of defenses that are practical in the real world?

## Topics

We invite submissions on any aspect of backdoor attacks and defenses in machine learning, which includes but is not limited to:
- Novel backdoor attacks against ML systems, including CV, NLP, ML models in cyber-physical systems, etc.
- Detecting backdoored models under different threat models, such as having limited clean data or no data, no access to model weights, using attack samples, etc.
- Eliminating backdoors in attacked models under different settings, such as limited access or no access to the original training/test data
- Certification/verification methods against backdoor attacks with guarantees
- Real-world or physical backdoor attacks in deployed systems, such as autonomous driving systems, facial recognition systems, etc.
- Hardware-based backdoor attacks in ML
- Backdoors in distributed learning, federated learning, reinforcement learning, etc.
- Theoretical understanding of backdoor attacks in machine learning
- Explainable and interpretable AI in backdoor scenarios
- Futuristic concerns on trustworthiness and societal impact of ML systems regarding backdoor threats
- Exploration of the relations among backdoors, adversarial robustness, and fairness
- New applications of backdoors in other scenarios, such as watermarking ML property, boosting privacy attacks, etc.
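Since data poisoning is the canonical delivery mechanism described above, a minimal sketch may help make it concrete. The snippet below illustrates BadNets-style poisoning on an image classification dataset: a small trigger patch is stamped onto a random fraction of training images, which are then relabeled to an attacker-chosen target class. The `poison_dataset` helper, patch size, and data shapes are illustrative assumptions, not taken from any particular attack implementation.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_frac=0.05, seed=0):
    """BadNets-style poisoning sketch: stamp a small white patch in the
    bottom-right corner of a random subset of images and relabel them to
    the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_frac * len(images)), replace=False)
    images[idx, -3:, -3:, :] = 1.0   # 3x3 trigger patch; images in [0, 1], NHWC
    labels[idx] = target_class       # consistent misclassification target
    return images, labels

# A model trained on the poisoned set tends to map any input carrying the
# trigger to `target_class` while behaving normally on clean inputs.
x, y = poison_dataset(np.random.rand(1000, 32, 32, 3),
                      np.random.randint(0, 10, size=1000), target_class=7)
```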
1
iclr2023_dg
# What do we need for successful domain generalization?

## Workshop Description

The real challenge for any machine learning system is to be reliable and robust in any situation, even if it differs from the training conditions. Existing general-purpose approaches to domain generalization (DG) — a problem setting that challenges a model to generalize well to data outside the distribution sampled at training time — have failed to consistently outperform standard empirical risk minimization baselines. In this workshop, we aim to work towards answering a single question: what do we need for successful domain generalization? We conjecture that additional information of some form is required for general-purpose learning methods to be successful in the DG setting. The purpose of this workshop is to identify possible sources of such information, and to demonstrate how these extra sources of data can be leveraged to construct models that are robust to distribution shift.

Specific topics of interest include, but are not limited to:
- Leveraging domain-level meta-data (one simple instantiation is sketched below)
- Exploiting multiple modalities to achieve robustness to distribution shift
- Frameworks for specifying known invariances/domain knowledge
- Causal modeling and how it can be robust to distribution shift
- Empirical analysis of existing domain generalization methods and their underlying assumptions
- Theoretical investigations into the domain generalization problem and potential solutions
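As one concrete (hypothetical) instantiation of leveraging domain-level metadata, the sketch below adds a risk-extrapolation-style (V-REx) penalty on top of ERM: per-domain risks are computed from integer domain ids, and their variance across domains is penalized. The `vrex_loss` name and penalty weight are assumptions for illustration, not a method endorsed by the workshop.

```python
import torch

def vrex_loss(logits, targets, domains, beta=1.0):
    """ERM objective plus a V-REx-style penalty: the variance of per-domain
    risks. `domains` holds an integer domain id for each example; the batch
    is assumed to contain at least two domains."""
    ce = torch.nn.functional.cross_entropy
    risks = torch.stack([ce(logits[domains == d], targets[domains == d])
                         for d in domains.unique()])
    return risks.mean() + beta * risks.var()
```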
2
iclr2023_ml4materials
# Machine Learning for Materials

## Overview

Many of the world's most crucial challenges, such as access to renewable energy, energy storage, or clean water, are currently fundamentally bottlenecked by materials challenges. The discovery of new materials drives the development of key technologies like solar cells, batteries, and catalysis. Machine learning has significantly impacted the modeling of drug-like molecules and proteins, including the discovery of new antibiotics and the accurate prediction of 3D protein structures. Geometric deep learning methods, in particular, have made tremendous progress in modeling atomic structures and are a promising direction for solving open problems in computational materials science.

While there has been growing interest in materials discovery with machine learning, the specific modeling challenges posed by materials have been largely unknown to the broader community. In particular, compared with the domain of drug-like molecules and proteins, the modeling of materials has the two major challenges outlined below.

First, materials-specific inductive biases are needed to develop successful ML models. For example, materials often lack a convenient representation, like 2D graphs for molecules or sequences for proteins. Moreover, most materials are found in the condensed phase. This means they need to be represented under periodic boundary conditions, introducing challenges to both representation learning and generative models.

Second, there exists a broad range of interesting materials classes, such as inorganic crystals, polymers, catalytic surfaces, nanoporous materials, and more. Each class of materials demands a different approach to representing its structures, along with new tasks/datasets to enable rapid ML development.

This workshop aims at bringing together the community to discuss and tackle these two types of challenges. In session A, we will feature speakers discussing the latest progress in developing ML models for materials with a focus on algorithmic challenges, covering topics like geometric deep learning and generative models. In particular, what can we learn from the more developed field of ML for molecules and proteins, and where might the challenges differ and opportunities for novel developments lie? In session B, we will feature speakers discussing the unique challenges of each sub-field of materials design and how to define meaningful tasks that are relevant to the domain, covering areas including inorganic materials, polymers, nanoporous materials, and catalysis. More specifically, what are the key materials design problems that ML can help tackle?

## Topics

Example topics include (but are not limited to):
- Representation of materials
- Generative models for materials
- Unique challenges in modeling materials with machine learning
- Physical inductive biases useful for machine learning models for materials
- Benchmark datasets and tools
- Machine learning potentials
- Automated experimental synthesis and characterization
- Integration of simulation and experimental data
- Language models on scientific literature
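To make the periodic-boundary-condition challenge above concrete, here is a minimal sketch of how a crystal is commonly specified (a 3x3 lattice matrix plus fractional atomic coordinates) and how interatomic distances are computed under the minimum-image convention. The cell, coordinates, and `min_image_distance` helper are illustrative assumptions.

```python
import numpy as np

# A periodic structure: rows of `lattice` are the lattice vectors (Angstrom),
# and atoms live at fractional coordinates in [0, 1); the cell repeats forever.
lattice = np.diag([4.05, 4.05, 4.05])          # illustrative cubic cell
frac_coords = np.array([[0.0, 0.0, 0.0],
                        [0.5, 0.5, 0.5]])

def min_image_distance(f1, f2, lattice):
    """Distance between two atoms under periodic boundary conditions using
    the minimum-image convention (adequate for small, not-too-skewed cells)."""
    d = f1 - f2
    d -= np.round(d)                           # wrap difference into [-0.5, 0.5)
    return np.linalg.norm(d @ lattice)

print(min_image_distance(frac_coords[0], frac_coords[1], lattice))
```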
3
iclr2023_mlgh
## Machine Learning & Global Health

In spite of the impressive advances in machine learning in recent decades, the successes of this field during the COVID-19 pandemic were modest at best. Much work remains, for both machine learning and global health researchers, to deliver true progress in global health. This workshop will start a lasting and consistent effort to close the gap between advances in machine learning and the practitioners and policy makers working in public health globally. It will focus on difficult public health problems and relevant machine learning and statistical methods.

We will use this opportunity to bring together researchers from different communities to share new ideas and past experiences. We will facilitate rapid communication of the latest methodological developments in machine learning to parties who are in positions to use them, and establish feedback loops for assessing the applicability and relevance of methods that are available and the gaps that exist. It will be a unique opportunity to challenge both research communities and demonstrate important, policy-relevant applications of sophisticated methods at one of the most prestigious annual machine learning conferences.

## Topics

This will be the first ever machine learning conference workshop on the topic "Machine Learning & Global Health", sponsored by the Machine Learning & Global Health Network. By showcasing key applied challenges, along with recent theoretical advances, we hope to foster connections and prompt fruitful discussion. We will invite researchers to submit extended abstracts for contributed talks and posters along the themes of:
- What lessons can we learn from the COVID-19 pandemic?
- What sorts of questions in global health can machine learning be useful for? What sorts of questions in global health is machine learning unlikely to be useful for?
- The current limitations in the application of machine learning to solving global health problems, and possible solutions to these limitations.
- How can we leverage machine learning in order to: promote public health worldwide; be proactive against future pandemics; understand and address inequalities in health?
- What types of data and data sharing practices would enable better machine learning and global health?

The workshop will focus on difficult public health problems and relevant machine learning and statistical methods, including but not limited to:
- Disease transmission models (a minimal example follows below);
- Multi-agent modelling;
- Epidemiology and public health;
- Semi-mechanistic modelling of infectious disease dynamics; and
- Any work within the intersection of ML and global health
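Since disease transmission models and semi-mechanistic infectious-disease dynamics are on the topic list above, the snippet below gives a minimal forward-Euler simulation of the classic SIR compartmental model as a shared reference point. The rates, step size, and `simulate_sir` helper are illustrative assumptions.

```python
import numpy as np

def simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=160, dt=0.1):
    """Classic SIR model: dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I,
    dR/dt = gamma*I, integrated with forward Euler on population fractions."""
    s, i, r, traj = s0, i0, 0.0, []
    for _ in range(int(days / dt)):
        new_inf, new_rec = beta * s * i * dt, gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        traj.append((s, i, r))
    return np.array(traj)

peak_infected = simulate_sir()[:, 1].max()   # e.g., compare intervention scenarios
```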
4
iclr2023_mrl
# Multimodal Representation Learning: Perks and Pitfalls

## About the workshop

Following the rise of deep learning, multimodal machine learning has made steady progress, becoming ubiquitous in many domains. Learning representations from multiple modalities can be beneficial, since different perceptual modalities can inform each other and ground abstract phenomena in a more robust, generalisable way. However, the complexity of different modalities can hinder the training process, requiring careful design of the model in order to learn meaningful representations. In light of these seemingly conflicting aspects of multimodal learning, we must improve our understanding of what makes each modality different, how they interact, and what the desiderata of multimodal representations are. With this workshop, we aim to bring the multimodal community together, promoting work on multimodal representation learning that provides systematic insights into the nature of the learned representations, as well as ways to improve and understand the training of multimodal models, both from a theoretical and an empirical point of view.

## Topics

We welcome submissions related to any aspect of multimodal representation learning, including but not limited to:
- Properties of multimodal representations.
- Insights on interactions across modalities.
- Novel applications regarding the nature and number of modalities.

In particular, we encourage submissions that address the following questions:
- **Representation:** How do we identify useful properties of multimodal representations?
  - What semantic information is encoded in the learned representations?
  - How does the geometry of the representation space affect the quality of the learned representations?
  - What properties are leveraged for downstream tasks?
- **Training:** How can we promote useful properties of multimodal representations?
  - What are the limits of representation models with regard to the number of modalities?
  - How do different learning objectives influence the resulting representations?
  - How do we promote the robustness of the representations to adversarial attacks, missing input modalities, and noise?
- **Modalities:** What makes a modality different? How can we improve their interactions?
  - How can we quantify the (dis)similarity between modalities?
  - How do different modalities contribute to the semantics of the learned representations?
  - What are the representation benefits of having multimodal observations as opposed to just a single modality?

The MRL workshop aims to bring together experts from the multimodal learning community in order to advance these fundamental questions and discuss the future of the field. We invite submissions that present analyses of the properties of multimodal representations, insights on interactions across modalities, as well as novel applications regarding the nature and number of modalities employed.
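One widely used way to train such multimodal representations, included here as a hedged illustration rather than a workshop-endorsed method, is a symmetric CLIP-style contrastive (InfoNCE) objective over a batch of paired embeddings: matched image/text pairs are pulled together and all other in-batch pairs are pushed apart. The function name and temperature are assumptions.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over paired image/text embeddings of shape [B, D]."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature             # [B, B] similarities
    targets = torch.arange(len(logits), device=logits.device)   # i-th pair matches
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```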
5
iclr2023_nf
## Neural Fields across Fields: Methods and Applications of Implicit Neural Representations

Addressing problems in different science and engineering disciplines often requires solving optimization problems, including learning from large training data. One class of methods has recently gained significant attention for problems in computer vision and visual computing: coordinate-based neural networks parameterizing a field, such as a neural network that maps a 3D spatial coordinate to a flow field in fluid dynamics, or to a colour and density field in 3D scene representation. Such networks are often referred to as neural fields. The application of neural fields in visual computing has led to remarkable progress on various computer vision problems such as 3D scene reconstruction and generative modelling, leading to more accurate, higher-fidelity, more expressive, and computationally cheaper solutions. The exciting progress has also led to the creation of a vibrant research community.

Given that neural fields can represent spatio-temporal signals in arbitrary input/output dimensions, they are highly general as a tool to reason about real-world observations, be they common modalities in machine learning and vision, such as images, 3D shapes, 3D scenes, video, and speech/audio, or more specialized modalities, such as flow fields in physics, scenes in robotics, medical images in computational biology, and weather data in climate science. However, though some adjacent fields such as robotics have recently seen an increased interest in this area, most of the current research is still confined to visual computing, and the application of neural fields in other fields is in its early stages. We thus propose a workshop with the following key goals:
- Bring together researchers from a diverse set of backgrounds, including machine learning, computer vision, robotics, applied mathematics, physics, chemistry, biology and climate science, to exchange ideas and expand the domains of application of neural fields, including but not limited to:
  - vision: image/video/scene/3D geometry reconstruction
  - robotics: face/body/hand modelling, localization, planning, control
  - audio: audio/speech processing and generation
  - physics: solving PDEs
  - biology: protein structure reconstruction, medical imaging
  - climate science: weather/climate prediction
  - general: compression
- Highlight and discuss recent trends, advances and limitations of neural fields, both in terms of theory and methodology, including but not limited to: conditioning, optimization, meta-learning, representation of input space, architecture, generative modelling, spatial/temporal transformations, neural fields as data, sparsification.
- Provide a forum for the ICLR community to get introduced to and discuss the exciting and growing area of neural fields, and also socialize with a diverse group of peers that have shared research interests.

As prospective participants, we primarily target machine learning researchers interested in the questions and foci outlined above. Specific target communities within machine learning include, but are not limited to: robotics, visual computing, computational biology, computational cognitive science, deep learning, and optimization.

## Topics

Key fundamental questions that we aim to address in this workshop are:
- How could we encourage and facilitate the exchange of ideas and collaboration across the different research fields that can benefit from applying neural fields?
- How can we improve the architectures, optimization and computation/memory efficiency of neural fields?
- Which metrics and methods should we use to evaluate improvements to neural fields? For example, is reconstruction accuracy measured by PSNR sufficient, and if not, in which cases is it insufficient?
- When should we avoid using neural fields? For example, does it make sense to use neural fields for discrete data such as text and graphs?
- Which tasks can we tackle with neural fields that haven’t yet been explored?
- What representation can we use for neural fields in order to extract high-level information from them and solve downstream tasks? What novel architectures do we need to extract such information from these representations?
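The coordinate-network idea described above is compact enough to sketch: a small MLP maps a 3D coordinate to a colour and density field, with random Fourier features to counteract the spectral bias of plain MLPs. Layer sizes, the feature scale, and the `NeuralField` class are illustrative assumptions, not a reference implementation.

```python
import torch
import torch.nn as nn

class NeuralField(nn.Module):
    """Minimal neural field: 3D coordinate -> (RGB colour, density)."""
    def __init__(self, n_freqs=64, hidden=256):
        super().__init__()
        self.register_buffer("B", torch.randn(3, n_freqs) * 10.0)  # Fourier basis
        self.mlp = nn.Sequential(
            nn.Linear(2 * n_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4))                    # (r, g, b, density)

    def forward(self, xyz):                          # xyz: [N, 3]
        proj = 2 * torch.pi * xyz @ self.B
        feats = torch.cat([proj.sin(), proj.cos()], dim=-1)
        out = self.mlp(feats)
        return out[..., :3].sigmoid(), out[..., 3:].relu()

rgb, density = NeuralField()(torch.rand(1024, 3))    # fit by regressing observations
```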
6
iclr2023_physics4ml
## Physics for Machine Learning

Combining physics with machine learning is a rapidly growing field of research. Thus far, most of the work in this area focuses on leveraging recent advances in classical machine learning to solve problems that arise in the physical sciences. In this workshop, we wish to focus on a slightly less established topic, which is the converse: exploiting structures (or symmetries) of physical systems, as well as insights developed in physics, to construct novel machine learning methods and gain a better understanding of such methods. A particular focus will be on the synergy between scientific problems and machine learning, and on incorporating the structure of these problems into the machine learning methods used in that context. However, the scope of application of those models is not limited to problems in the physical sciences, and they can be applied even more broadly to standard machine learning problems, e.g., in computer vision, natural language processing or speech recognition.

Examples that fall under the theme of leveraging physics for machine learning include methods that reason from first principles, embedding fundamental laws, e.g., symmetries or conservation laws, in machine learning systems. Recent work on the topic includes designing equivariant neural networks to handle non-trivial geometries, and designing deep neural networks as Hamiltonian systems to improve not only trainability and expressivity but also generalization. Many of these methods can in turn be applied to physics itself, where many fundamental laws are known to hold, vastly improving particle physics models, or molecular and fluid dynamics simulations. Additional examples which are not restricted to problems in the physical sciences include recent state-of-the-art score-based SDE diffusion models for generative modeling using insights from molecular dynamics, (recurrent) sequence models based on Hamiltonian or multi-particle systems, and graph neural networks based on coupled oscillators or gradient flows.

The goal of this workshop is to encourage multi-disciplinary discussions and build bridges between researchers from diverse but complementary scientific backgrounds, i.e., researchers (from academia and industry) in pure machine learning as well as in the physical sciences, engineering, and applied mathematics. The workshop further aims to discuss the current state of the research field as well as possible solutions to pressing questions. The questions this workshop aims to discuss are:
- Are there standard machine learning methods that can be interpreted and analyzed from a physics perspective? If so, what insights can we gain from that?
- What types of structures and symmetries in physical systems have not yet been leveraged?
- Are there applications of machine learning to specific types of problems in the physical sciences where only 'brute-force' approaches are applied and no structure of the problem is leveraged? If so, how can we change that?
- Which established methods developed specifically for particular scientific applications may be of interest to the broader machine learning community? For example, neural networks parameterized as Hamiltonian systems have favorable properties, such as invertibility, that could be leveraged in classical machine learning approaches (see the sketch below).
- For participants who want to focus on classical machine learning applications (i.e., no application in the physical sciences): What is a good approach to tackling problems in classical machine learning using structure from physical systems (a.k.a. a physicist's perspective on problems in classical machine learning)?

## List of Topics

We invite all submissions on using physics for machine learning methods. A list of exemplary topics can be found below. Please note that this list is non-exhaustive. If you are not sure whether your topic is suitable for the workshop, please feel free to contact any of the organizers.
- Physics-inspired machine learning, in particular for
  - Graph representation learning
  - Sequence modeling (e.g., Transformers, RNNs)
  - Generative modeling (e.g., diffusion models, score-based SDEs, normalizing flows)
  - Neural ODEs (e.g., NCDEs, CNFs)
  - Equivariant neural networks
  - Physics-based optimization
- Machine learning methods with a physics-based inductive bias, for instance applied to
  - Molecular simulations
  - Fluid dynamics
  - Astrophysics
  - Particle physics
  - Multi-scale problems (e.g., in multi-physics)
- Physics-based symbolic regression
- Dynamical systems reconstruction with physics-based inductive bias
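As one example of building physical structure into an architecture, the sketch below follows the Hamiltonian-neural-network idea mentioned above: a network learns a scalar H(q, p), and the dynamics are obtained from its gradients, so the learned vector field conserves the learned energy by construction. Dimensions, the hidden size, and the `HNN` class are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HNN(nn.Module):
    """Hamiltonian network sketch: dq/dt = dH/dp, dp/dt = -dH/dq."""
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.H = nn.Sequential(nn.Linear(2 * dim, hidden), nn.Tanh(),
                               nn.Linear(hidden, 1))
        self.dim = dim

    def time_derivative(self, x):                 # x = [q, p], shape [N, 2*dim]
        x = x.requires_grad_(True)
        grad = torch.autograd.grad(self.H(x).sum(), x, create_graph=True)[0]
        dH_dq, dH_dp = grad[:, :self.dim], grad[:, self.dim:]
        return torch.cat([dH_dp, -dH_dq], dim=-1)

# Trained by regressing observed (dq/dt, dp/dt); integrating the learned
# field then approximately conserves the learned energy.
```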
7
iclr2023_rrl
## Reincarnating RL

This inaugural workshop at ICLR 2023 (in-person) aims to bring further attention to the emerging paradigm of reusing prior computation in RL, which we refer to as reincarnating RL. Specifically, we plan to discuss the potential benefits of reincarnating RL, its current limitations and associated challenges, and to come up with concrete problem statements and evaluation protocols for the research community to work on.

Why? Reusing prior computation can further democratize RL research by allowing the broader community to tackle complex RL problems without requiring excessive computational resources. Furthermore, real-world RL use cases commonly arise in scenarios where prior computational work is available, making reincarnating RL important to study. Additionally, reincarnating RL can enable a benchmarking paradigm where researchers continually improve and update existing trained agents, especially on problems where improving performance has real-world impact. However, except for some large-scale RL efforts with ad hoc approaches, the RL community has only recently started focusing on reincarnating RL as a research problem in its own right.

## Topics

Learning “tabula rasa”, that is, from scratch without much previously learned knowledge, is the dominant paradigm in reinforcement learning (RL) research. While learning tabula rasa works well for small-scale research domains, it is the exception rather than the norm for solving larger-scale problems. Large-scale RL systems often undergo multiple design or algorithmic changes during their development cycle and use ad hoc approaches for incorporating these changes without retraining from scratch, which would have been prohibitively expensive. Additionally, the inefficiency of tabula rasa RL typically excludes the majority of the RL community outside certain resource-rich labs from tackling computationally demanding problems.

To address these inefficiencies of tabula rasa RL, this workshop will focus on the alternative paradigm of leveraging prior computational work, referred to as reincarnating RL, to accelerate training across design iterations of an RL agent or when moving from one agent to another. Recently, the research community has started to focus on this emerging paradigm by leveraging computational work in the form of learned network weights (for fine-tuning), learned policies, offline data, pretrained representations, LLMs, learned skills, dynamics models, etc. It is thus evident that there is interest in this important topic of leveraging prior computation in RL, to which our workshop can bring further attention.

In particular, we are interested in bringing together researchers and practitioners to discuss questions on theoretical, empirical and practical aspects of reusing prior computation in RL, including but not limited to:
- Developing methods for accelerating RL training depending on the type or combination of prior computation available:
  - Learned policies (one distillation-based pattern is sketched below)
  - Offline datasets
  - Pretrained dynamics models
  - Foundation models or LLMs
  - Pretrained representations
  - Learned skills
- Algorithmic decisions and challenges for dealing with the suboptimality of prior computational work
- Democratizing large-scale RL problems by releasing prior computation and formalizing the corresponding reincarnating RL setting
- Evaluation protocols, frameworks and standardized benchmarks for leveraging prior computation in RL research
- Real-world / large-scale applications of reincarnating RL
- Properties of prior computational work needed to guarantee optimality of reincarnating RL methods
- Connections to transfer learning, lifelong learning and data-driven simulation
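One common pattern for reusing a learned policy, referenced in the list above, is kickstarting-style distillation: the usual RL loss is augmented with a KL term pulling the student toward a previously trained teacher, and the term is annealed away so the student can eventually surpass a suboptimal teacher. The `kickstart_loss` signature and fixed `alpha` are assumptions for illustration, not a canonical implementation.

```python
import torch.nn.functional as F

def kickstart_loss(student_logits, teacher_logits, env_loss, alpha=1.0):
    """RL objective plus a distillation term toward a teacher policy;
    anneal `alpha` toward zero over training to drop the teacher."""
    distill = F.kl_div(F.log_softmax(student_logits, dim=-1),
                       F.softmax(teacher_logits, dim=-1),
                       reduction="batchmean")
    return env_loss + alpha * distill
```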
8
iclr2023_rtml
## Trustworthy and Reliable Large-Scale Machine Learning Models

In recent years, the landscape of AI has been significantly altered by advances in large-scale pre-trained models. Scaling up models with more data and parameters has significantly improved performance and achieved great success in a variety of applications, from natural language understanding to multi-modal representation learning. However, when applying large-scale AI models to real-world applications, there have been concerns about their potential security, privacy, fairness, robustness, and ethics issues. In the wrong hands, machine learning could be used to negatively impact mission-critical domains, including healthcare, education, and law, resulting in economic and environmental consequences as well as legal and ethical concerns. For example, existing studies have shown that large-scale pre-trained language models produce toxicity in open-ended generation and risk amplifying bias against marginalized groups, such as BIPOC and LGBTQ+ communities. Moreover, large-scale models can unintentionally leak sensitive personal information during the pre-training stage. Last but not least, machine learning models are often viewed as "black boxes" and may produce unpredictable, inaccurate, and unexplainable results, especially under domain shifts or maliciously tailored attacks.

To address these negative societal impacts of large-scale models, researchers have investigated different approaches and principles to ensure robust and trustworthy large-scale AI systems. This workshop is the first attempt to bridge the gap between security, privacy, fairness, ethics, and large-scale AI models, and aims to discuss the principles and experiences of developing robust and trustworthy large-scale AI systems. The workshop also focuses on how future researchers and practitioners should prepare themselves to reduce the risks of unintended behaviors of large ML models.

## Topics

We invite submissions on any aspect of trustworthy and reliable ML, especially for large-scale models. Topics include but are not limited to:
- Novel methods for building more trustworthy large-scale machine learning models that prevent or alleviate the negative societal impacts of existing ML methods
- New applications and settings where the robustness and trustworthiness of machine learning play an important role, and how well existing techniques work under these settings
- Machine learning models with verifiable guarantees (such as robustness, fairness, and privacy guarantees) to build trustworthiness
- Privacy-preserving approaches for large-scale machine learning models
- Theoretical understanding of trustworthy machine learning
- Explainable and interpretable methods for large-scale AI
- Pre-training techniques to build more robust and trustworthy large-scale machine learning models
- Efficient fine-tuning methods to alleviate the trustworthiness gap for large-scale pre-trained models
- Machine unlearning to mitigate the privacy, toxicity, and bias issues within large-scale AI models
- Robust decision-making under uncertainty
- Futuristic concerns about trustworthy machine learning for foundation models
- Game-theoretic analysis for socially responsible machine learning systems
- Case studies and field research on the societal impacts of applying machine learning in mission-critical and human-centric tasks
9
iclr2023_snn
## Overview of Sparsity in Neural Networks

Deep networks with billions of parameters trained on large datasets have achieved unprecedented success in various applications, ranging from medical diagnostics to urban planning and autonomous driving, to name a few. However, training large models is contingent on exceptionally large and expensive computational resources. Such infrastructures consume substantial energy, produce a massive carbon footprint, and often soon become obsolete and turn into e-waste. While there has been a persistent effort to improve the performance of machine learning models, their sustainability is often neglected. This realization has motivated the community to look closer at the sustainability and efficiency of machine learning, by identifying the most relevant model parameters or model structures. In this workshop, we examine the community's progress toward these goals and aim to identify areas that call for additional research efforts. In particular, by bringing together researchers with diverse backgrounds, we will focus on the limitations of existing methods for model compression and discuss the tradeoffs between model size and performance.

## Topics

The following is a non-exhaustive list of questions we aim to address through our invited talks, panels, and accepted papers:
- Where do we stand in evaluating and incorporating sustainability in machine learning? We make our models larger every day. Is this the right way to learn better?
- Do we need better sparse training algorithms, or better hardware support for the existing sparse training algorithms?
- Hardware seems to be behind in supporting sparse training. What are the challenges of hardware design for sparse and efficient training? Are GPUs the answer, or do we need new designs?
- Our current theory can only analyze small neural networks. Can compression help us provide performance and reliability guarantees for learning?
- What are the tradeoffs between sustainability, efficiency, and performance? Are these constraints competing against each other? If so, how can we find a balance?
- Among different compression techniques, quantization has found the most applications in industry. What are the current experiences and challenges in deployment?
- How effective can sparsity be in different domains, ranging from reinforcement learning to vision and robotics?
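For readers new to the area, the simplest baseline much of the compression discussion builds on is one-shot global magnitude pruning, sketched below: the fraction of weights with the smallest absolute values across all layers is zeroed, and the returned masks are re-applied after each optimizer step to keep them zero. The helper name and 90% default are illustrative assumptions (note that `torch.quantile` caps input size, so very large models would need subsampling).

```python
import torch

def global_magnitude_prune(model, sparsity=0.9):
    """Zero out the `sparsity` fraction of smallest-magnitude weights
    across all weight matrices; returns the binary masks."""
    weights = [p for p in model.parameters() if p.dim() > 1]   # skip biases
    threshold = torch.quantile(torch.cat([w.abs().flatten() for w in weights]),
                               sparsity)
    masks = []
    with torch.no_grad():
        for w in weights:
            mask = (w.abs() > threshold).float()
            w.mul_(mask)                  # re-apply after each optimizer step
            masks.append(mask)
    return masks
```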
10
iclr2023_sr4ad
## Overview of Scene Representations for Autonomous Driving

This workshop aims to promote the real-world impact of ML research toward self-driving technology. While ML-based components of modular stacks have been a huge success, there remains progress to be made in the development of integration strategies and intermediate representations. We invite contributions discussing the following topics, in order to empower the next generation of autonomous vehicles:
- Representation learning for perception, prediction, planning, simulation, etc.
- Approaches that account for interactions between traditional sub-components (e.g., joint perception and prediction, end-to-end driving)
- ML / statistical learning approaches to facilitate safety / interpretability / generalization
- Driving environments / datasets for benchmarking ML algorithms
- New perspectives on the future of autonomous driving
11
iclr2023_tml4h
## Trustworthy Machine Learning for Healthcare Workshop

Machine learning (ML) has achieved or even exceeded human performance in many healthcare tasks, owing to the fast development of ML techniques and the growing scale of medical data. However, ML techniques are still far from being widely applied in practice. Real-world scenarios are far more complex, and ML often faces challenges to its trustworthiness, such as lack of explainability, generalization, fairness, and privacy. Improving the credibility of machine learning is hence of great importance to enhance the trust and confidence of doctors and patients in using the related techniques. We aim to bring together researchers from interdisciplinary fields, including but not limited to machine learning, clinical research, and medical imaging, to provide different perspectives on how to develop trustworthy ML algorithms and to accelerate the adoption of ML in healthcare.

## Scope and Topics

Topics of interest include, but are not limited to:
- Generalization to out-of-distribution samples.
- Explainability of machine learning models in healthcare.
- Reasoning, intervening, or causal inference.
- Debiasing ML models from learning shortcuts.
- Fair ML for healthcare.
- Uncertainty estimation of ML models and medical data.
- Privacy-preserving ML for medical data.
- Learning informative and discriminative features under weak annotations.
- Human-machine cooperation (human-in-the-loop, active learning, etc.) in healthcare, such as medical image analysis.
- Multi-modal fusion and learning, such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, pathology, genetics, electronic health records, etc.
- Benchmarks that quantify the trustworthiness of ML models in medical imaging tasks.
12
iclr2023_trustml
## Pitfalls of limited data and computation for Trustworthy ML

Due to the impressive performance of ML algorithms, they are increasingly used in a wide range of applications that impact our daily lives. These include sensitive domains like healthcare, banking, social services, autonomous transportation, social media, and advertisement. However, ML algorithms that are deployed in the real world are restricted by a multitude of computational and statistical limitations. Often ignored in the ML research pipeline, these restrictions include:
- **Statistical limitations:** lack of available data, limited availability of high-quality labelled data, and lack of data from different domains of interest
- **Computational limitations:** lack of high-speed hardware, lack of high-memory hardware, extreme constraints on the computation time of ML algorithms during training or inference, and lack of hardware suitable for specific kinds of computations (e.g., hardware that can exploit sparsity)

It is necessary to understand the impact of such limitations on the performance of ML algorithms. As these algorithms are increasingly used for high-stakes decision-making in socially impactful domains, their trustworthiness is becoming an increasingly relevant design factor to consider. In recent years, several issues with the trustworthiness of ML algorithms have been identified:
- **Privacy:** Leaking private information about the training data.
- **Fairness:** Incurring disparate impact on sensitive subpopulations.
- **Miscalibration:** Giving a false sense of reliability through miscalibrated predictions (a small diagnostic is sketched below).
- **Reproducibility:** Inconsistency across multiple runs of the ML pipeline.
- **Distribution shift:** Sensitivity to natural and adversarial test distribution shifts.
- **Robustness:** Vulnerability to noise in the training data.
- **Safety and Reliability:** Causing issues in the safety of resulting applications.
- **Explainability and Interpretability:** Identifying factors leading to predictions.
- **Auditing and Certifying ML systems:** Challenges of audit and certification under limited data and compute.

In this workshop, we want to invite theoretical and empirical researchers to come together and discuss barriers to trustworthy ML and algorithms that overcome them. To enable this, we will solicit submissions that address questions such as (but not limited to) the following:
- How does having less data or poor-quality data affect the trustworthiness of ML algorithms? Can these problems be mitigated with new algorithmic techniques (e.g., SSL, new DNN models, active learning)?
- How do computational limitations impact the trustworthiness of ML algorithms? What are some natural statistical tasks that exhibit fundamental trade-offs between computational efficiency (runtime, memory, etc.) and trustworthiness (fairness, privacy, robustness)? Are these trade-offs also observed in practice?
- Do these limitations result in trade-offs between different aspects of trustworthiness? If yes, how can they be averted with relaxations or new algorithmic techniques?
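As a concrete handle on the miscalibration issue flagged above, below is a minimal sketch of the expected calibration error (ECE), a standard diagnostic that bins predictions by confidence and averages the gap between confidence and accuracy. The bin count and helper name are illustrative assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """ECE: size-weighted average |accuracy - confidence| over confidence
    bins; `correct` is a boolean array of per-prediction correctness."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece
```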
13
iclr2023_tsrl4h
## Workshop on Time Series Representation Learning for Health

Time series data have been used in many applications in healthcare, such as the diagnosis of a disease, prediction of disease progression, clustering of patient groups, online monitoring, and dynamic treatment regimes, to name a few. More and more methods build on representation learning to tackle these problems, by first learning a (typically low-dimensional) representation of the time series and then using the learned representation for the corresponding downstream task. Machine learning (ML) provides a powerful set of tools for time series data, but its applicability in healthcare is still limited. As a result, the potential of time series analysis has yet to be fully realized.

Our workshop on 'Time Series Representation Learning for Health' aims at bringing together the community to discuss cutting-edge research in this area, with a focus on the following themes:
- Labeling, in general and in particular of long-term recordings, is a nontrivial task requiring appropriate experts like clinicians, whose time is limited
- Time series data acquired within real-life settings and with novel measurement modalities are recorded without supervision, having no labels at all
- The high dimensionality of data from multimodal sources
- Missing values or outliers within acquired data, or irregularity of the measured data

This workshop focuses on these aspects and the potential benefits of integrating representation learning in time series applications. Our goal is to encourage a discussion around developing new ideas towards representation learning, complemented with robust, interpretable, and explainable approaches which can provide a medical expert with more information than just a prediction result. To make time series representation learning research actionable in clinical practice, we especially encourage discussions from application areas that tackle minority data groups and, thus, have their own unique challenges; for example, pediatrics, critical care (ICU), rare diseases like Alzheimer's, HIV, fertility, and others.

## Topics

We solicit original paper submissions advancing research in representation learning with time series data, with a focus on healthcare applications. Under this premise, we encourage submissions touching on topics such as:
- Robustness
- Explainable and interpretable methods
- Causality
- Fairness
- Challenges of addressing time series data (one masked-reconstruction approach is sketched below), such as
  - labeling of real-world data,
  - long-term recordings,
  - handling high-dimensionality of data from multimodal sources,
  - dealing with missing values and outliers in data, or irregularity of measured data
- Presenting novel open-access datasets

Finally, we encourage work that is actionable in clinical practice, especially targeting application areas that tackle minority data groups and, thus, have their own specific, often under-explored, challenges. Such areas include, but are not limited to, pediatrics, critical care (ICU), rare diseases like Alzheimer's, HIV, and fertility.
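Several of the challenges above (few or no labels, missing values, irregular sampling) are often attacked with masked-reconstruction pretraining; the sketch below feeds observation indicators alongside values and reconstructs only entries that were observed but hidden, so no imputation targets are needed. The `MaskedTSEncoder` architecture and masking rate are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MaskedTSEncoder(nn.Module):
    """Self-supervised encoder for partially observed clinical time series."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(2 * n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, x, observed, mask_frac=0.3):
        # x: [B, T, F] values; observed: [B, T, F], 1.0 where measured
        keep = (torch.rand_like(x) > mask_frac).float() * observed
        inp = torch.cat([x * keep, keep], dim=-1)    # values + indicators
        recon = self.head(self.rnn(inp)[0])
        target = observed * (1 - keep)               # observed but hidden
        return ((recon - x) ** 2 * target).sum() / target.sum().clamp(min=1)
```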
14
iclr2024_agi
# How Far Are We From AGI

## Topics

This workshop aims to become a melting pot for ideas, discussions, and debates regarding our proximity to AGI. We invite submissions on a range of topics including, but not limited to:
1. **Frontiers of AGI research:** Examples include AI agents, embodied AI, retrieval-based and tool-augmented LLMs, knowledge-enhanced AI, and multi-agent AI.
2. **Classic AGI Attempts as Inspiration:** Delving into historical methods such as expert systems, symbolic AI, and Type I and Type II reasoning for insights that can guide LLM research further.
3. **Interdisciplinary Insights for AGI:** Drawing parallels from fields like psychology, sociology, and neuroscience to inspire and inform the development of LLMs towards AGI.
4. **Fundamental Limitations of LLMs:** Analyzing the intrinsic capabilities, or lack thereof, in LLMs that might impede their progression to AGI. This includes discussions on reasoning, planning, and more.
5. **Practical Limitations of LLMs and Foundation Models:** Addressing external challenges like system constraints, computational costs, data acquisition barriers, and privacy concerns.
6. **Safety, Ethics, and Regulation in AGI Development:** Exploring the complexity of moral, safety, and regulatory concerns that will shape AGI's evolution.
7. **AGI's Economic and Societal Impacts:** Probing the potential changes AGI might bring to our societies, economies, and daily lives.
15
iclr2024_al4de
# AI4DifferentialEquations In Science

## Background

Over the past decade, the integration of Artificial Intelligence (AI) into scientific exploration has grown into a transformative force, propelling research into new realms of discovery. The AI4DifferentialEquations in Science workshop at ICLR 2024 invites participants on a dynamic journey at the interface of machine learning and the computational sciences, known as Scientific Machine Learning (SciML). This workshop aims to unleash innovative approaches that harness the power of AI algorithms combined with computational mathematics to advance scientific discovery and problem solving, enabling us to push the boundaries of scientific computing beyond its traditional limits.

Our goal is to delve into the latest AI advancements, particularly those that significantly enhance the efficiency of solving ordinary and partial differential equations (ODEs/PDEs). These methods result in significant performance gains, allowing solutions at high resolution that were previously infeasible or required large amounts of computation. The AI4DifferentialEquations in Science workshop aims to unlock the full potential of data-driven approaches in advancing scientific frontiers in earth sciences, climate, and computational fluid dynamics, to name a few.

## Topics

Key topics include but are not limited to:
- Exploration of novel applications of deep learning techniques in scientific simulations of partial or ordinary differential equations.
- From forward and inverse problems in PDEs to equation discovery, design optimization, and beyond, witnessing the diverse applications of AI in scientific pursuits.
- Explainability and interpretability of AI models in scientific contexts.
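A canonical example of the methods in scope is the physics-informed neural network (PINN), sketched below for the 1D heat equation u_t = alpha * u_xx: automatic differentiation supplies the PDE residual, which becomes the training loss at random collocation points. The network sizes and diffusivity are illustrative assumptions, and initial/boundary-condition losses are omitted for brevity.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))

def pde_residual_loss(n_points=256, alpha=0.1):
    """Penalize violations of u_t = alpha * u_xx at random (x, t) points."""
    xt = torch.rand(n_points, 2, requires_grad=True)       # (x, t) in [0, 1]^2
    u = net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0:1]
    return ((u_t - alpha * u_xx) ** 2).mean()

# Minimized with any optimizer; boundary/initial terms would be added here.
```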
16
iclr2024_bgpt
## Bridging the Gap Between Practice and Theory in Deep Learning

The success of deep learning practice has driven the rapid development of learning theory. However, recent studies have pointed out that many existing theories yield scenarios and conclusions that contrast with what is observed in the corresponding real-world applications, leaving a significant gap. This workshop aims to bridge this gap by (i) troubleshooting unnoticed gaps between learning theory and practice and (ii) narrowing the existing ones by developing new analyses. We hope that this workshop will not only raise awareness of the challenges in bridging the gap between theory and practice in deep learning, but also inspire new solutions and insights that contribute to the advancement of deep learning.

## Topics

The topics of this workshop include (but are not limited to) the following:
- **Optimization theory for deep learning.** Several subareas may include: the Edge of Stability (EoS) phenomenon, adaptive optimizers, non-smoothness of the neural network landscape, and the roles of initialization, architectural design, and optimization tricks in influencing convergence.
- **Generalization theory for deep learning.** Several subareas may include: the implicit bias of gradient-based optimizers, effects of overparameterization, loss landscape flatness, and, more generally, how neural network architectures, data distributions, optimizers, and initialization impact generalization performance.
- **Theory of large language models.** Several subareas may include: understanding scaling laws and emergence, the theory of in-context learning, the theory of chain-of-thought, the expressive power of autoregressive Transformers, and, more fundamentally, the key reasons behind the success of large language models.
17
iclr2024_dmlr
## Data-centric Machine Learning Research

Large-scale foundation models are revolutionizing machine learning, particularly in vision and language domains. While model architecture received significant attention in the past, recent focus has shifted towards the importance of data quality, size, diversity, and provenance. This workshop aims to highlight cutting-edge advancements in data-centric approaches for large-scale foundation models in new domains, in addition to language and vision, and to engage the vibrant interdisciplinary community of researchers, practitioners, and engineers who tackle practical data challenges related to foundation models. By featuring innovative research and facilitating collaboration, it aims to bridge the gap between dataset-centric methodologies and the development of robust, versatile foundation models that are able to work in and across a variety of domains in service of humanity.

## Topics

Topics will include, but are not limited to:
- Data sources for large-scale datasets
- Construction of datasets from large quantities of unlabeled/uncurated data
- Model-assisted dataset construction
- Quality signals for large-scale datasets (a simple deduplication signal is sketched below)
- Datasets for evaluation
- Datasets for specific applications
- Impact of dataset drift on large-scale models
- Ethical considerations for and governance of large-scale datasets
- Data curation and HCI
- Submissions to benchmarks such as DataPerf, DynaBench, and DataComp
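Deduplication is one of the most common quality signals applied when constructing datasets from uncurated data; the sketch below flags documents whose hashed word-8-gram sets overlap an earlier document above a Jaccard threshold. It is deliberately naive (production pipelines typically use MinHash/LSH to scale), and the helper name and thresholds are illustrative assumptions.

```python
import hashlib

def near_duplicate_ids(docs, n=8, threshold=0.8):
    """Flag near-duplicates via Jaccard overlap of hashed word n-grams."""
    seen, flagged = [], []
    for i, doc in enumerate(docs):
        toks = doc.split()
        grams = {hashlib.md5(" ".join(toks[j:j + n]).encode()).hexdigest()
                 for j in range(max(len(toks) - n + 1, 1))}
        if any(len(grams & g) / max(len(grams | g), 1) > threshold for g in seen):
            flagged.append(i)      # near-duplicate of an earlier document
        else:
            seen.append(grams)
    return flagged
```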
18
iclr2024_dpfm
# Navigating and Addressing Data Problems for Foundation Models

## Overview

Foundation Models (FMs, e.g., GPT-3/4, LLaMA, DALL-E, Stable Diffusion, etc.) have demonstrated unprecedented performance across a wide range of downstream tasks. As researchers strive to keep pace with this rapid evolution and to understand the capabilities, limitations, and implications of FMs, attention is now shifting to the emerging notion of data-centric AI. Curation of training data is crucially important for the performance and reliability of FMs, and a wealth of recent work demonstrates that data-perspective research sheds light on a promising direction toward critical issues such as safety, alignment, efficiency, security, privacy, and interpretability. To move forward, this workshop aims to discuss and explore a better understanding of the new paradigm for research on data problems for foundation models.

## Interested Areas

We are interested in papers from the following areas:
- Data Problems x Foundation Models
- Data Quality, Dataset Curation, and Data Generation
- Data Perspectives on Efficiency, Interpretability, and Alignment
- Data Perspectives on Safety and Ethics
- Data Copyright, Legal Issues, and the Data Economy
19
iclr2024_gem
# Generative and Experimental Perspectives for Biomolecular Design

## About

Biomolecular design, through artificial engineering of proteins, molecules, and nucleic acids, holds immense promise in addressing pressing medical, industrial, and environmental challenges. While generative machine learning has shown significant potential in this area, a palpable disconnect exists with experimental biology: many ML research efforts prioritize static benchmark performance, potentially sidelining impactful real-world applications.

The Generative and Experimental perspectives in bioMolecular design (GEM) workshop seeks to bridge this gap by bringing computationalists and experimentalists together. Together, we will explore the strengths and challenges of generative ML in biology, the experimental integration of generative ML, and pinpoint biological problems ready for ML. GEM is collaborating with Nature Biotechnology to allow exceptional submissions to be considered for fast-tracking in their journal. GEM features two tracks of submission: an in silico generative machine learning track, and an experimental track for papers that have wet-lab results. Our lineup features renowned scientists as panelists and emerging leaders as speakers, encapsulating a spectrum from high-throughput experimentation and computational biology to generative ML. With a diverse organizing team and backed by industry sponsors, we dedicate the workshop to pushing the boundaries of ML's role in biology.

## Topics

Interested topics include but are not limited to the following:
- Generative ML advancements for biomolecular design, with in silico results:
  - Inverse design of all biomolecules
  - Modelling biomolecular data
  - Model interpretability
- Biological problems and data ripe for generative ML, and/or employment of ML for biomolecular design, with wet-lab experimental results:
  - Biological problems apt for ML applications
  - High-throughput data generation methods
  - Adaptive experimental design
  - Benchmarks, datasets, and oracles
20
iclr2024_genai4dm
## Generative Models for Decision Making

Generative Artificial Intelligence (AI) has made significant advancements in recent years, particularly with the development of large language and diffusion models. These generative models have demonstrated impressive capabilities across various domains, such as text, image, audio, and video. Concurrently, decision making has made significant strides in solving complex sequential decision-making problems with the help of external knowledge sources. However, there remains untapped potential in combining generative models with decision making algorithms to tackle real-world challenges, particularly to improve the sample efficiency of tabula rasa training by introducing priors from related domains such as visual question-answering, image captioning, and image generation. This workshop aims to bring together researchers and practitioners from the fields of generative AI and decision making to explore the latest advances, methodologies, and applications. By fostering collaborations between these two domains, we intend to unlock new opportunities for addressing complex problems that lie at the intersection of both fields.

## Topics

The workshop will cover a wide range of topics, including but not limited to:
- **Large Language Models and Decision Making:** Exploring how large language models, such as GPT-4 and beyond, can be integrated with decision making algorithms to improve performance on complex sequential decision-making tasks. Moreover, we welcome contributions that study how to make large language models suitable for interactive and embodied settings, be it for planning, reward generation, simulation of the physical world, or introducing human priors into decision making via language. Tentative research questions: which benchmarks, evaluation criteria, and environments should be developed by the community to assess the utility of large language models for decision making?
- **Diffusion Models and Decision Making:** Investigating the potential of diffusion models and other generative models for enhancing decision making algorithms for planning, reinforcement learning from pixels, and robotic control. Tentative research questions: can diffusion models be used as physics-aware world models, thus improving the sample efficiency of online decision making methods?
- **Sample Efficiency in Decision Making:** Discussing techniques for improving sample efficiency in decision making through generative models, enabling the application of decision making in data-constrained environments. Specifically, can generative models reduce the need for reward-labelled data by exploiting larger amounts of unlabelled samples? Tentative research questions: can we use large language models or video prediction models to enable faster learning on complex, open-ended decision making tasks?
- **Exploration in Decision Making:** Exploring how generative models can facilitate exploration strategies in decision making, especially in high-dimensional and sparse-reward settings. For instance, since generative models can efficiently represent parts of the data distribution, it is reasonable to assume that they can also provide an informative learning signal. Tentative research questions: how can pre-trained generative models help decision making agents solve long-horizon, sparse-reward, or open-ended tasks without a clear definition of success?
- **Transfer Learning in Decision Making with Generative Models:** Investigating methods to leverage pre-trained generative models for transfer learning in decision making, enabling agents to adapt to new tasks more efficiently through a deeper understanding of the underlying dynamical system of decision making problems. Tentative research questions: do generative models used for high-level planning or low-level control transfer better to unseen domains than classical decision making methods?
- **Inverse Reinforcement Learning and Imitation Learning:** Analyzing how generative models can assist IRL/IL algorithms in learning from observed behaviour, or be used for data augmentation. Tentative research questions: can generative models capture richer information contained in human demonstrations than existing methods?

Generative AI has led to significant advances in natural language, vision, audio, and video. Such advances can lead to fundamental changes in decision making, and, with the aim of bridging generative AI with the decision making community from control, planning, and reinforcement learning, we invite submissions in this area, including the following topics:
- Studying how generative models can directly be used as decision making agents, i.e., an LLM agent.
- Studying how generative models can algorithmically change the decision making problem, i.e., formulating decision making as reward-conditioned generative modelling (see the sketch below) or planning as inference on a generative model.
- Studying how the priors in large generative models can enable sample efficiency and effective exploration.
- Studying how generative models can aid the inference of the intent of a set of demonstrations (i.e., inverse reinforcement learning).
- Studying how generative models can enable effective transfer learning.
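To illustrate the reward-conditioned-generation framing referenced in the list above, below is a Decision-Transformer-flavoured sketch that substitutes an MLP for the transformer: the policy predicts an action from the state and a target return-to-go, and is conditioned on a high return at evaluation time. All dimensions and the class name are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ReturnConditionedPolicy(nn.Module):
    """Frame control as conditional generation: action ~ p(a | state, return)."""
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + 1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_actions))

    def forward(self, state, return_to_go):
        return self.net(torch.cat([state, return_to_go], dim=-1))

# Trained with cross-entropy against actions from an offline dataset;
# at test time, condition on a high return-to-go to elicit good behaviour.
policy = ReturnConditionedPolicy(state_dim=4, n_actions=2)
logits = policy(torch.randn(32, 4), torch.full((32, 1), 200.0))
```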
21
iclr2024_globalai
# Global AI Cultures

## Description

Building globally inclusive artificial intelligence systems, ones that encode and respect cultural sensibilities and perform well for users across cultural contexts, is an important goal as we deploy AI products globally. However, existing AI evaluation, design, and deployment practices are not oriented towards a diversity of global cultures, and we do not fully recognize the cultural values that AI amplifies. If this relationship between AI and global cultures is not examined, we could inadvertently universalize Western-centered AI and create unforeseen impacts on global cultural production, values, and consumption. This workshop aims to develop a shared vocabulary for contending with the cultural impacts, cultural gaps, and cultural values of AI, by putting AI researchers considering the technical nuances of generative AI into conversation with scholars from the humanities and social sciences who have long thought about the social and cultural impacts of new technologies.

## Themes

This workshop will encourage field building, deepening our understanding of how we can build and deploy globally inclusive AI and how we can responsibly encode cultural knowledge into our technologies. It will include discussions about:
- **Conceptual and Theoretical Foundations for Cultural Inclusion in AI**: What does global inclusion mean in the context of AI, and what are the possibilities and challenges of building culturally inclusive AI models?
- **Scalable Cultural Representation Evaluations**: How do we build evaluation and development pipelines that can test cross-cultural performance via cultural metrics such as representation, quality, impact, and inclusion at scale?
- **Culturally-Rich Training Datasets**: What are the features of a culturally representative training dataset, and what processes and conditions are needed in order to create or curate such training data?
- **Methods to Study Cultural Values of Generative AI**: How can we recognize and account for the different cultural values that are embedded in our AI pipelines? How do we bring our cultures of development in sync with our cultures of deployment?
- **User Interactions in Support of Cultural Inclusion**: Are there creative strategies that can reparatively promote the inclusion of subjugated cultural values through UI, deployment, or public education/advocacy?
- **Cultural Impacts of Generative AI**: How can we understand the immediate and longer-term impacts of these technologies on the culture industries? How does AI support or challenge existing dynamics in the culture industries? Are there existing norms or principles in non-AI systems of content creation and distribution?
22
iclr2024_llm4agents
# Large Language Model (LLM) Agents

## About

This workshop delves into the significance of agents driven by large language models (LLMs), a topic that has recently sparked intense discussions. Building on the current huge progress on LLMs, we'll focus on autonomous agents that perform intricate tasks in both real and simulated environments guided by natural language instructions. What sets these agents apart is their sophisticated use of language prompts, not just as a means of communication but also as a medium for reasoning, a characteristic once thought unique to humans.

## Topics

We will explore a range of topics in this workshop, including, but not limited to, the following areas:

- **Memory Mechanisms and Linguistic Representation**: This session will analyze the similarities between LLMs and human memory and will discuss the mechanisms of storage and formation of the linguistic representation in LLMs.
- **Tool Augmentation and Grounding (interaction with environment)**: Addressing the enhancement of LLMs through tool augmentation, this session will also include a discourse on grounding, i.e., linking natural language concepts to particular contexts.
- **Reasoning, Planning, and Risks**: This session will discuss the intertwined processes of reasoning and planning in language agents and highlight the potential hazards associated with language agents' ability to autonomously operate in the real world.
- **Multi-modality and Integration in Language Agents**: This session will explore how language agents can integrate multiple modalities such as vision, sound, and touch to enhance their understanding and interaction with the environment.
- **Conceptual Framework for Language Agents**: This session will delve into a potential framework for language agents by drawing from both classic and contemporary AI research and related fields such as neuroscience, cognitive science, and linguistics.
23
iclr2024_mefomo
## Workshop on Mathematical and Empirical Understanding of Foundation Models

Foundation models (FMs) have revolutionized machine learning research across domains. These models are trained on extensive, highly varied datasets and can be quickly adapted to solve many tasks of interest. FMs are extremely effective on language (e.g., GPT-3, BERT, PaLM, LLaMa), vision (e.g., SimCLR), speech (e.g., Whisper), and multi-modal (e.g., CLIP, DALL-E) inputs. However, understanding of FMs lags far behind their extraordinary performance. FMs are known for their surprising emergent capabilities, such as in-context learning, but rigorous characterization of such phenomena is sorely lacking. Recently, substantially smaller models (e.g., LLaMA) have demonstrated performance comparable to or better than huge FMs from the previous generation (e.g., OPT). These findings suggest that careful selection of data, training objectives, and adaptation methods can more effectively induce desirable properties in FMs. Development of such techniques can be accelerated through better understanding.

This workshop aims to bring together researchers who work on developing an understanding of FMs, through either careful experimentation or theoretical work. Rigorous characterization of FMs can also contribute to the broader goal of mitigating undesirable behaviors. FMs are now broadly available to users, so misaligned models present real-world risk. We thus also welcome submissions of previously unpublished works that investigate how to better characterize biases in models and align them.

## Topics

The workshop will focus on three main aspects of FMs: pretraining, adaptation, and emergent capabilities. These components may include, but are not limited to, the following topics.

- **Pre-Training:** How do FMs learn useful representations? Supervised downstream tasks (e.g., solving math word problems) are often markedly different from the self-supervised pre-training objective. When and how does pre-training improve performance on a diverse set of downstream tasks? Possible sub-topics include:
  - **Understanding the data**
    - How does the quality of the dataset impact the power of the learned representation?
    - Fundamental scaling and limits: how much data do we need? Given a fixed compute budget, is it better to increase the model size or the dataset size?
    - What subsets of the data are most important for the performance and capabilities of foundation models?
  - **Loss Functions**
    - Vision: contrastive vs. generative vs. masked autoencoding
    - Language: masked language modeling, autoregressive modeling, auxiliary objectives; tokenization methods
    - Multi-modal: contrastive objectives, translation-driven objectives
  - **Model Architecture**
    - Effect of model scale
    - Attention vs. recurrence (e.g., structured state-space models)
    - Nonparametric or semi-parametric models: retrieval-augmented models
    - Diffusion models vs. autoregressive models
    - Mixture-of-experts
  - **Generalization, transfer, and representation learning**
    - Role of optimization on representation learning and transfer
    - Analyzing learned representations
    - Theory in simplified models
    - Training dynamics and hyperparameters at scale
- **Adaptation:** How can we quickly adapt FMs? FMs are trained using unlabelled data with general-purpose objectives, so how can we effectively adapt them to meaningful downstream use cases? Possible sub-topics include:
  - **Fine-tuning, prompting, in-context learning**
    - How does fine-tuning modify the pre-trained representation?
    - Representation-based: multimodal representation learners admit straightforward adaptation to downstream tasks through direct manipulation of the representation space (e.g., DINO). How and when does this work?
    - Investigations into different prompting and decoding methods
    - Which examples should be inserted during in-context learning?
  - **Instruction Tuning**
    - What does instruction tuning do to the base model? How do models learn to generalize in this setting?
    - How can instruction tuning be made more effective?
  - **Model Un-Learning and Watermarking**
    - Given data copyright concerns, there is growing interest in ensuring that a model can “un-learn” (i.e., forget) a datapoint it was pre-trained on. What are effective methods for this?
    - Watermarking outputs can ensure that model generations are identifiable. What types of watermarks are effective while preserving quality? (A sketch of one published detection scheme follows below.)
  - **Safety and Alignment**
    - Pre-trained language models are often fine-tuned to align with human preferences. How does an aligned model differ from the base model?
    - How does reinforcement learning from human feedback (RLHF) work? In what cases can supervised fine-tuning achieve the same goals?
    - What are the safety deficiencies of current FMs? How can we effectively understand the internal workings of FMs in order to better align them?
  - **Robustness, Calibration, and Biases**
    - In what cases do FMs generalize to out-of-distribution examples? Why? How can we encourage this behavior?
    - What kinds of biases are accumulated in FMs during pre-training? How can we later remove or mitigate these biases?
  - **Efficient methods**
    - Fine-tuning often modifies a small subspace of the model parameters. Do we really need scale during fine-tuning? Can fine-tuning be made more efficient?
    - Task-aware pruning and distillation methods may yield smaller, more efficient models that preserve downstream performance. How do these methods work? Can we make them more effective?
- **Emergent phenomena:** Scale appears to drive qualitatively different behavior in models (e.g., in-context learning, reasoning, chain-of-thought) that can emerge suddenly during training (e.g., grokking). We lack a rigorous understanding of what increasing the scale does to the training procedure and how these desirable emergent capabilities come about. Possible sub-topics include:
  - **Scale-driven capabilities**
    - Chain of thought, reasoning, in-context learning capabilities
    - Improved robustness and calibration
    - Improved characterization of emergent capabilities
  - **Scaling laws**
    - How and why does performance scale with data, compute, and model size?
    - Grokking: how do new capabilities suddenly emerge during FM training?
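As one concrete reference point for the watermarking questions above, here is a hedged sketch of detection for "green-list" watermarking (Kirchenbauer et al., 2023), where generation softly boosts a pseudo-randomly keyed subset of the vocabulary at each step. The `is_green` callable is a stand-in for the keyed hash used at generation time and is an assumption of this sketch.

```python
import math

def watermark_z_score(tokens, is_green, gamma: float = 0.5) -> float:
    """z-score of the green-token count under the no-watermark null hypothesis."""
    # Score each token against a green list keyed on its predecessor token.
    green = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1  # number of scored positions
    # Under H0 (no watermark), green ~ Binomial(n, gamma), so a large z-score
    # is evidence that the text carries the watermark.
    return (green - gamma * n) / math.sqrt(n * gamma * (1 - gamma))
```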
24
iclr2024_mlgenx
# Machine Learning for Genomics Explorations

## Overview

Our limited understanding of the biological mechanisms underlying diseases remains a critical bottleneck in drug discovery. As a result, we often lack insights into why patients develop specific conditions, leading to the failure of many drug candidates in clinical trials. Recent advancements in genomics platforms and the emergence of diverse omics datasets have sparked increasing interest in this field. The primary objective of this workshop is to bridge the gap between machine learning and genomics, emphasizing target identification and emerging drug modalities such as gene and cell therapies and RNA-based drugs. By fostering interdisciplinary collaboration, we aim to advance the integration of these disciplines and accelerate innovation in drug discovery.

## Subject Areas

We consider a broad range of subject areas including, but not limited to, the following topics. All contributions introducing new ML methods for existing problems, as well as those highlighting and explaining open problems, are welcome. We also encourage submissions related to applications in molecular biology, including, but not limited to, single-cell RNA analysis, bulk RNA studies, proteomics, and microscopy imaging of cells and/or tissues.

- Foundation models for genomics
- Biological sequence design
- Interpretability and generalizability in genomics
- Causal representation learning
- Perturbation biology
- Modeling long-range dependencies in sequences, single-cell and spatial omics
- Integrating multimodal perturbation readouts
- Active learning in genomics
- Generative models in biology
- Multimodal representation learning
- Uncertainty quantification
- Optimal transport
- Experimental design for biology
- Graph neural networks and knowledge graphs
- New datasets and benchmarks for genomics explorations
- Pre-training multi-omics models
- Synthetic data generation and data quality for pre-training, fine-tuning, and instruction tuning
- Fine-tuning (SFT, RLHF, RL with lab feedback, ...) on novel tasks
- In-context learning with large-context models
- Reasoning through prompt engineering or architectural design
- Interpretability and uncertainty quantification
- Knowledge retrieval (RAG, knowledge graphs, ...)
- Efficient interactive system designs (agents, humans, and biological tools)
- Training/fine-tuning LLM-powered design and planning engines
25
iclr2024_pml
# Privacy Regulation and Protection in Machine Learning

## Introduction

Recent advances in artificial intelligence greatly benefit from data-driven machine learning methods that train deep neural networks with large-scale data. The usage of data should be responsible, transparent, and compliant with privacy regulations. This workshop aims to bring together industry and academic researchers, privacy regulators, and legal and policy experts to have a conversation on privacy research. We hope to (re)visit major privacy considerations from both technical and nontechnical perspectives through interdisciplinary discussions.

## Topics

Topics of interest include, but are not limited to, the following:

- Relationship of privacy regulation (such as GDPR, DMA) to machine learning
- Interpretation and explanation of data privacy
- Efficient methods for privacy-preserving machine learning
- Federated learning for data minimization
- Differential privacy theory and practice (a sketch of a basic mechanism follows below)
- Threat models and privacy attacks
- Encryption methods for machine learning
- Privacy in machine learning systems
- Privacy for large language models
- Relationship between privacy, transparency, auditability, and verifiability
- Relationship between privacy, robustness, fairness, etc.
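As a minimal illustration of the differential-privacy topic above, here is a sketch of the Laplace mechanism, the textbook DP primitive: adding Laplace noise with scale sensitivity/epsilon to a numeric query makes its release epsilon-differentially private. The query and parameter values below are illustrative.

```python
import numpy as np

def laplace_mechanism(true_answer: float, sensitivity: float, epsilon: float) -> float:
    """Release a numeric query answer with epsilon-differential privacy."""
    # Noise scale grows with sensitivity (worst-case effect of one record)
    # and shrinks as the privacy budget epsilon grows.
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_answer + noise

# Example: a counting query (sensitivity 1) released with epsilon = 0.5.
noisy_count = laplace_mechanism(true_answer=1234, sensitivity=1.0, epsilon=0.5)
```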
26
iclr2024_pml4lrs
# Practical ML for Limited/Low Resource Settings

## Introduction

The constant progress being made in machine learning needs to extend across borders if we are to democratize ML in developing countries. Adapting state-of-the-art (SOTA) methods to resource-constrained environments such as developing countries can be challenging in practice. Recent breakthroughs in natural language processing and generative image models, for instance, rely on increasingly complex and large models that are pre-trained on large unlabeled datasets. In most developing countries, resource constraints make the adoption of these breakthroughs challenging. Methods such as transfer learning will not fully solve the problem either, due to bias in pre-training datasets that do not reflect environments in developing countries or the cost of fine-tuning larger models. This gap in resources between SOTA requirements and developing-country capacities hinders a democratic development of machine learning methods and infrastructure.

The main goal of PML4LRS is to bring together researchers and practitioners (from academia, industry, and government agencies) to reflect on aspects of designing, implementing, deploying, and monitoring machine learning (ML) solutions that are typical in low-resource environments across multiple sectors, such as healthcare, finance, agriculture, or education. Specifically, we encourage contributions that highlight issues related to:

- Advances in algorithms and methods tailored for problems related to data scarcity, imbalanced representations, and limited computational resources
- Industry practices to scale up ML solutions in low-resource settings while balancing performance and latency tradeoffs
- Societal and policy impacts of ML solutions in developing countries obtained via pilot studies, qualitative research, and human-in-the-loop settings

## Topics

Resource constraints in developing countries can necessitate alternatives to conventional machine learning approaches. We invite submissions that address the following and related topic areas:

- Algorithms and Methods
  - Methods for collecting and generating training data within data-scarce (limited labeled data) settings (such as weak labels, model-based pre-labeling, teacher-student models, and transfer learning).
  - Machine learning techniques applied to limited data (e.g., active learning, few-shot and zero-shot learning).
  - Approaches to training and inference on resource-constrained devices, such as model quantization, model compression, model distillation, low-precision training, model pruning methods, and generalized model optimizations (a minimal quantization sketch follows below).
  - Alternative learning methods coupled with deep models targeted at low-resource settings.
  - Automated techniques to stratify and valuate data in order to increase throughput in low-resource settings.
  - Analyzing models from the perspective of fairness, explainability, etc.
- Industry Experience and Applications
  - Data science and engineering practices that help balance accuracy/latency tradeoffs while scaling ML models in low-resource environments.
  - Measuring success or impact that goes beyond algorithmic metrics (such as accuracy or F1 score).
  - Data-driven techniques that support public institutions (government transparency, healthcare, education, etc.).
- Social and Policy Topics
  - Successful ML solution implementation stories which work at a small scale (e.g., a local institution or city) that could be applied at larger scale.
  - Connecting skilled professionals with the organizations that deeply understand the local problems.
  - Securing funding for proof-of-concept (POC) projects or for scaling existing POCs.
  - Building effective research and implementation teams, with a focus on challenges specific to developing regions such as countries in Africa.
  - When machine learning is NOT a viable option.
  - Strategies and policies enabling or enhancing AI/ML adoption in developing countries.
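As a minimal illustration of the quantization item above, here is a hedged sketch using PyTorch's post-training dynamic quantization. The toy model is a stand-in; the accuracy/latency tradeoff should always be validated per task and device.

```python
import torch
import torch.nn as nn

# A stand-in model; in practice this would be a trained network.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# Rewrite Linear layers to use int8 weights with dynamically quantized
# activations (newer PyTorch versions also expose this under
# torch.ao.quantization).
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The quantized model typically shrinks roughly 4x in weight storage and can
# speed up CPU inference, at a small accuracy cost to be measured per task.
```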
27
iclr2024_r2fm
# Reliable and Responsible Foundation Models

## Overview

In the era of AI-driven transformations, foundation models (FMs), like large-scale language and vision models, have become pivotal in various applications, from natural language processing to computer vision. These models, with their immense capabilities, offer a plethora of benefits but also introduce challenges related to reliability, transparency, and ethics. The workshop on reliable and responsible FMs (R2-FM) delves into the urgent need to ensure that such models are trustworthy and aligned with human values. The significance of this topic cannot be overstated, as the real-world implications of these models impact everything from daily information access to critical decision-making in fields like medicine and finance. Stakeholders, from developers to end-users, care deeply about this because the responsible design, deployment, and oversight of these models dictate not only the success of AI solutions but also the preservation of societal norms, equity, and fairness. Some of the fundamental questions that this workshop aims to address are:

- How can we identify and characterize unreliable and irresponsible behaviors in FMs? Topics include susceptibility to spurious features, prompt sensitivity, lack of self-consistency, and issues of nonfactuality or “hallucinations”.
- How should we assess the potentially harmful capabilities of FMs and quantify their societal impact? For example, how can we predict the consequences of misuse of highly capable large language models?
- How can we pinpoint and understand the causes behind known or emerging sources of FM unreliability? This may involve examining training data, objectives, architectural design, learned weights, or other facets.
- What principles or guidelines should inform the design of the next generation of FMs to ensure they are both reliable and responsible?
- Can we establish theoretical frameworks that guarantee the reliability and responsibility of FMs?
- In practical applications, how might we leverage domain-specific knowledge to guide FMs towards improved reliability and responsibility across diverse areas, such as drug discovery, education, or clinical health?

## Topics

We invite submissions from researchers in the fields of reliability and responsibility pertaining to foundation models. Additionally, we welcome contributions from scholars in the natural sciences (such as physics, chemistry, and biology) and social sciences (including pedagogy and sociology) that necessitate the use of reliable and responsible foundation models. In summary, our topics of interest include, but are not limited to:

- Theoretical foundations of FMs and related domains
- Empirical investigations into the reliability and responsibility of various FMs
- In-depth discussions exploring new dimensions of foundation model reliability and responsibility
- Interventions during pre-training to enhance the reliability and responsibility of FMs
- Innovations in fine-tuning processes to bolster the reliability and responsibility of FMs
- Discussions on aligning models with potentially superhuman capabilities to human values
- Benchmark methodologies for assessing the reliability and responsibility of FMs
- Issues of reliability and responsibility of FMs in broad applications
28
iclr2024_realign
# Workshop on Representational Alignment

## About

Both natural and artificial intelligences form representations of the world that they use to reason, make decisions, and communicate. Despite extensive research across machine learning, neuroscience, and cognitive science, it remains unclear what the most appropriate ways are to compare and align the representations of intelligent systems (Sucholutsky et al., 2023).

## Questions

In the second edition of the Workshop on Representational Alignment (Re-Align), we bring together researchers from diverse fields who study representational alignment to make concrete progress on this set of open interdisciplinary problems. We invite researchers across the machine learning, neuroscience, and cognitive science communities to participate in the workshop, and to contribute papers that address questions of representational alignment that stem from the following central theme: when and why do intelligent systems learn aligned representations, and how can scientists and engineers intervene on this alignment?

Other questions topical for this year’s workshop include:

- To what extent does representational alignment indicate shared computational strategies among biological and artificial systems?
- How have current alignment metrics advanced our understanding of computation, and what measurement approaches should we explore next? (A sketch of one widely used metric follows below.)
- How can we develop more robust and generalizable measures of alignment that work across different domains and types of representations?
- How can we systematically increase (or decrease) representational alignment among biological and artificial systems?
- What are the implications (positive and negative) of increasing or decreasing representational alignment between systems, on behavioral alignment, value alignment, and beyond?
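As one concrete example for the metrics question above, here is a sketch of linear centered kernel alignment (CKA; Kornblith et al., 2019), a widely used measure for comparing two systems' representations of the same stimuli. The feature matrices are assumed to be precomputed responses to a shared set of inputs.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between X (n, d1) and Y (n, d2), features for the same n inputs."""
    # Center each feature dimension; CKA is invariant to rotation and
    # isotropic scaling of either representation.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    cross = np.linalg.norm(X.T @ Y, "fro") ** 2
    return cross / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))
```

A value near 1 indicates highly similar representational geometry; near 0, largely unrelated representations.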
29
iclr2024_setllm
# Workshop on Secure and Trustworthy Large Language Models

## About

The rapid advances of large language models (LLMs) are revolutionizing many long-standing natural language processing tasks, ranging from machine translation to question answering and dialog systems. However, as LLMs are often built upon massive amounts of text data and subsequently applied in a variety of downstream tasks, building, deploying, and operating LLMs entails profound security and trustworthiness challenges, which have attracted intensive research efforts in recent years. The primary aim of this workshop is to identify such emerging challenges, discuss novel solutions to address them, and explore new perspectives and constructive views across the full theory/algorithm/application stack.

## Topics

The potential topics include but are not limited to:

- Reliability assurance and assessment of LLMs
- Privacy leakage issues of LLMs
- Copyright protection
- Interpretability of LLMs
- Plagiarism detection and prevention
- Security of LLM deployment
- Backdoor attacks and defenses in LLMs
- Adversarial attacks and defenses in LLMs
- Toxic speech detection and mitigation
- Challenges in new learning paradigms of LLMs (e.g., prompt engineering)
- Fact verification (e.g., hallucinated generation)
30
iclr2024_ts4h
# Learning from Time Series for Health

Time series data are ubiquitous in healthcare, from medical time series to wearable data, and present an exciting opportunity for machine learning methods to extract actionable insights about human health. However, a huge gap remains between the existing time series literature and what is needed to make machine learning systems practical and deployable for healthcare. This is because learning from time series for health is notoriously challenging: labels are often noisy or missing, data can be multimodal and extremely high dimensional, missing values are pervasive, measurements are irregular, data distributions shift rapidly over time, explaining model outcomes is challenging, and deployed models require careful maintenance over time. These challenges introduce interesting research problems that the community has been actively working on for the last few years, with significant room for contribution still remaining. Learning from time series for health is a uniquely challenging and important area with increasing application. Significant advancements are required to realize the societal benefits of these systems for healthcare. This workshop will bring together machine learning researchers dedicated to advancing the field of time series modeling in healthcare to bring these models closer to deployment.

## Call for Papers

In our Time Series for Health Workshop, we delve into the complexities of time series data to better understand and improve human health. This field boasts rich diversity, encompassing various modalities such as wearables, Electronic Health Record (EHR) data, and medical time series including ECG, EEG, fMRI, and audio data. Our workshop will pivot around two central themes:

- **Behavioral Health**: Exploring the intricate dynamics of behavioral patterns and their implications on health through time series analysis.
- **Foundation Models**: Investigating the core models that form the bedrock for understanding and interpreting time series data in healthcare.

These themes will be echoed in our keynote addresses, round-tables, and interactive panel discussions. Submissions that align with these themes will be given special consideration for spotlight talks. However, all submissions that meet the guidelines listed below will be considered.

**Submission Guidelines**

We invite papers that:

- Propose innovative methods or perspectives.
- Present preliminary results that open avenues for future research.
- Introduce new resources like datasets to propel research in this domain.
- Clearly demonstrate or discuss their relevance to healthcare, specifically focusing on challenges within health time series data.

**Topics of Interest**

Submissions may address, but are not limited to, the following topics as they relate to time series:

- Unsupervised, semi-supervised, and supervised representation learning.
- Novel architectures or models.
- Classification, regression, and forecasting.
- Bayesian models.
- Sequential decision-making.
- Challenges of time series data: missing values, noisy/irregular measurements, high-dimensionality (a preprocessing sketch for irregular measurements follows below).
- Multi-modal models incorporating time series.
- Deployment and implementation challenges.
- Explainability, fairness, and privacy in time series models.
- Practical applications (e.g., dynamic treatment recommendation for sepsis from EHR time series).
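To illustrate one standard way of handling the irregular, partially observed measurements listed above, here is a hedged preprocessing sketch in the spirit of GRU-D: each channel is expanded into (last observed value, observation mask, time since last observation), so a downstream model can reason about missingness explicitly. Array shapes and names are illustrative.

```python
import numpy as np

def expand_missingness(values: np.ndarray, observed: np.ndarray, times: np.ndarray):
    """values, observed: (T, C) arrays; times: (T,). Returns (filled, mask, delta)."""
    T, C = values.shape
    filled = np.zeros_like(values, dtype=float)   # last observed value, carried forward
    delta = np.zeros((T, C))                      # time since each channel was last seen
    last_val = np.zeros(C)
    last_time = np.full(C, times[0])
    for t in range(T):
        for c in range(C):
            if observed[t, c]:
                last_val[c] = values[t, c]
                last_time[c] = times[t]
            filled[t, c] = last_val[c]
            delta[t, c] = times[t] - last_time[c]
    return filled, observed.astype(float), delta
```

The three outputs are typically concatenated per time step and fed to a recurrent or attention-based model.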
31
iclr2025_agenticai
# Towards Agentic AI for Science: Hypothesis Generation, Comprehension, Quantification, and Validation

## About the Workshop

Our mission is to foster interdisciplinary collaboration to develop fully autonomous AI systems, addressing challenges like benchmark datasets, human-AI collaboration, robust tools and methods for validating AI outputs, and trustworthiness. By tackling these issues, we can unlock AI's transformative potential in research. In this workshop, themed Agentic AI for Science, we will explore these critical topics and welcome diverse perspectives. We will focus on integrating agentic AI systems to enhance scientific discovery while upholding rigorous standards. For AI to contribute effectively, it must generate novel hypotheses, comprehend their applications, quantify testing resources, and validate feasibility through well-designed experiments. This workshop serves as a vital forum for collaboration and knowledge-sharing aimed at redefining the landscape of scientific discovery.

This workshop aims to address four main research thrusts to propel future research, including (non-exclusively):

**Thrust 1. Design and development of agentic AI systems for scientific discovery.** The emergence of agentic AI, powered by foundation models, particularly generative models, opens up unprecedented opportunities for scientific discovery. These systems can potentially revolutionize various aspects of the scientific process, including hypothesis generation, comprehension of complex scientific phenomena, quantification, and validation. Designing and developing effective agentic AI systems for scientific discovery is both exciting and non-trivial. Pioneering work in this field has already demonstrated the promise of leveraging scientific tools, agents, and knowledge graphs. Notable examples include ChemCrow, which showcases the potential of AI in chemistry; Crispr-GPT, which applies AI to genetic engineering; and SciAgents, which illustrates the power of multi-agent systems in scientific discovery. These groundbreaking studies highlight the transformative potential of agentic AI in accelerating scientific progress and opening new avenues for research. Key research topics in this thrust include (but are not limited to):

- Developing scientific foundation models: Tailoring general foundation models specifically for various scientific fields to enhance relevance and accuracy.
- Effective scientific tool augmentation: Enhancing existing scientific tools and methodologies with agentic AI capabilities.
- Multi-agent decomposition design: Developing frameworks for scientific hypothesis generation using multiple specialized AI agents.
- Human-in-the-loop agentic systems: Improving reliability and interpretability of AI-driven scientific discoveries through strategic human intervention.

**Thrust 2. Theoretical foundations for scientific agentic AI.** Developing agentic scientific AI requires methods to quantify the predictions and performance of these systems, as well as to validate the scientific hypotheses they generate. A thorough investigation of agentic scientific AI systems also demands solid theoretical foundations and tools to ensure guarantees on their behavior. To analyze and evaluate such systems, we will incorporate theoretical tools in modeling, logical reasoning, model validation and diagnosis, interpretable AI, and other general methods that can provide guarantees on agentic systems. Key topics in this area include, but are not limited to, the following:

- Theoretical foundations: Statistical models and theories of agentic scientific AI, such as theoretical studies on in-context learning, multi-agent communication, game theory, physics-informed hard and soft optimization constraints, and neural operators.
- Logical reasoning: Inductive, deductive, and abductive reasoning; Bayesian reasoning and probabilistic programming; neural-symbolic approaches.
- Model quantification, validation, and diagnosis: Theory-driven metrics for quantifying AI system performance; self-evaluation of LLMs; data valuation and data-centric AI; diagnostics for data, architecture, and training processes; creation of standardized benchmarks for evaluating the validity of scientific hypothesis generation; scientific facts and hallucination.
- Interpretable AI: Approaches for explaining agentic AI system behaviors; quantifying trust, safety, and transparency; mechanistic interpretability.

**Thrust 3. Practical applications of scientific agentic AI.** Deploying agentic AI systems in practical scientific research across diverse domains presents numerous challenges, particularly due to the need for domain-specific adaptation, such as the unique data formats and model constraints of each scientific field. Bias in training data poses a significant risk, especially in sensitive domains like medicine. Trustworthiness and explainability are essential for scientists to confidently integrate AI-generated hypotheses and solutions into their research. Furthermore, ethical considerations arise when AI systems potentially automate research decisions that may impact public health, policy, or environmental outcomes, underscoring the importance of responsible AI deployment in science.

- Domain-specific model adaptation: Adapting agentic AI models to handle domain-specific data formats, workflows, and tools across various scientific fields; transfer learning and data-efficient fine-tuning.
- Bias detection and mitigation: Identifying and mitigating bias in training data, model design, and outputs; fairness-aware AI systems for sensitive domains like healthcare and social science.
- Robustness, trustworthiness, and explainability: Methods for improving the transparency and explainability of agentic AI systems in scientific research; uncertainty interpretation and quantification.
- Ethical considerations and responsible use of agentic AI in sensitive research areas; development of AI governance models to ensure accountability and human oversight in automated scientific workflows.

**Thrust 4. Open problems and challenges in scientific agentic AI.** Despite the promising potential of agentic AI in scientific discovery, many open problems and challenges remain to be addressed. These may include:

- Automatic curation of domain-specific scientific knowledge and integration of that knowledge into agentic AI systems.
- Advanced mechanisms for multi-agent collaboration in scientific discovery, with consideration of their scalability and computational efficiency.
- Continual evolution and learning of agentic AI systems; mechanisms for updating models and improving performance based on experimental results, new data, and discoveries.
- Validation and reproducibility of results generated by agentic AI systems.

## Workshop Themes

We invite contributions addressing the following research thrusts:

- Design and Development of Agentic AI Systems: Exploring frameworks, tools, and human-in-the-loop systems for scientific discovery.
- Theoretical Foundations: Developing statistical models and reasoning approaches for hypothesis validation and performance assessment.
- Practical Applications: Examining domain-specific adaptations, ethical considerations, and governance frameworks for responsible deployment.
- Open Problems and Challenges: Addressing issues in knowledge integration, validation, and continual improvement of agentic AI systems.

## Key Focus Areas

Submissions are encouraged in the following areas (not exhaustive):

- AI-driven hypothesis generation and validation.
- Statistical and logical reasoning approaches.
- Applications of AI in scientific experimentation.
- Ethical, reproducibility, and governance challenges in AI-driven science.
32
iclr2025_ai4chl
# AI for Children: Healthcare, Psychology, Education

## About the Workshop

Current AI research and applications often prioritize adult-focused solutions, while progress in AI designed specifically for children’s development, health, and education has lagged behind. Our workshop aims to spotlight this issue and bring together researchers from diverse fields to discuss the future of AI design and its applications for children. In the era of AI, developing bespoke AI systems for children holds special significance:

- Advanced AI technologies, such as large language models (LLMs), have the potential to support children’s development, education, and mental health, posing a critical new frontier for research.
- AI in pediatric healthcare is essential, as early diagnosis of childhood diseases can lead to timely interventions, improving prognoses and reducing infant mortality rates.
- AI can also provide valuable tools for children in low-resource countries, helping bridge gaps in education, healthcare, and other developmental supports.

Our workshop will invite researchers from the fields of AI, child psychology, education, pediatrics, and social good to discuss how AI, particularly new generative models like LLMs, can address the unique challenges in pediatrics, child psychology, and education. We will also explore the potential risks associated with AI applications for children.

We invite submissions of papers on all topics related to Artificial Intelligence and Machine Learning for Children, not limited to AI for Pediatrics, AI for Psychology, and AI for Education. All papers will be reviewed in a double-blind process, and accepted papers will be presented at the workshop. Topics of interest include (but are not limited to):

- New Methods on AI for Children (Deep Learning, Representation Learning, Embodied AI, Large Language Models, Reinforcement Learning, Foundation Models, etc.)
- New AI Datasets and Benchmarks about Children (Pediatrics, Child Psychology, Child Development, Education, etc.)
- New Viewpoints, Perspectives, Case Studies, Position Papers, and Survey Papers about risks and opportunities for Pediatrics, Child Development, and Child Education in the AI Era
33
iclr2025_ai4mat
## About the Workshop

The AI for Accelerated Materials Discovery (AI4Mat) Workshop at ICLR 2025 provides an inclusive and collaborative platform where AI researchers and materials scientists converge to tackle the cutting-edge challenges in AI-driven materials discovery and development. Our goal is to foster a vibrant exchange of ideas, breaking down barriers between disciplines and encouraging insightful discussions among experts from diverse disciplines and curious newcomers to the field. The workshop embraces a broad definition of materials design, encompassing matter in various forms, such as crystalline and amorphous solid-state materials, glasses, molecules, nanomaterials, and devices. By taking a comprehensive look at automated materials discovery spanning AI-guided design, synthesis, and automated material characterization, we hope to create an opportunity for deep, thoughtful discussion among researchers working on these interdisciplinary topics, and highlight ongoing challenges in the field.

AI4Mat was first held at NeurIPS 2022, bringing together materials scientists and AI researchers into a common forum with productive discussion on major research challenges at the intersection of AI and materials science. Since then, AI4Mat has established itself as a leading venue for the exchange of ideas on the latest developments in the field, bridging together international academic, industry, and government institutions. AI4Mat-NeurIPS-2023 highlighted the growing interest and expanding research community of this emerging field. This momentum continued with two workshops held in 2024 (AI4Mat-BOKU-2024 in Vienna and AI4Mat-NeurIPS-2024 in Vancouver) designed to further accelerate research progress. The field of AI-enabled materials discovery is increasingly propelled by a global and interdisciplinary research community, whose collaborative efforts are driving materials innovation toward tangible real-world impact across diverse applications. Inspired by these trends, we aim to focus AI4Mat-ICLR-2025 on two major themes this year:

- **How Do We Build a Foundation Model for Materials Science?** Drawing inspiration from the success of recent foundation models in language and computer vision, a plethora of scientific foundation models have been proposed, including some related to materials science and chemistry. Together, these efforts represent meaningful progress in applying the concept of foundation models to materials, but individually fall short of addressing a wide range of important materials problems. Given the relevance and growing interest in materials foundation models, we propose a discussion that centers on understanding the complex, interdisciplinary nature of foundation models for materials and how the community can contribute towards building them. To that end, we are bringing together experts from diverse institutions and backgrounds for a forum at AI4Mat-ICLR-2025.
- **What are Next-Generation Representations of Materials Data?** Advancements in AI for materials science have led researchers to focus on increasingly intricate and diverse systems, bringing them closer to real-world applications. This increase in complexity has raised questions about how to efficiently represent diverse materials systems, particularly those requiring the integration of multiple data modalities. Materials representation learning remains an open problem with unique challenges to be addressed so as to enable continued progress in the development of new machine learning methods for real-world materials challenges.
34
iclr2025_ai4na
# Workshop on AI for Nucleic Acids

AI4NA aims to popularize AI applications for nucleic acids and introduce nucleic acid research challenges to the broader AI community. This workshop aims to spotlight nucleic acids as the next frontier for AI research. By bringing together experts from machine learning and biology, we will explore how AI can address key challenges in nucleic acids research, such as RNA tertiary structure prediction, understanding nucleic acid interactions, and designing bespoke RNA/DNA molecules with therapeutic potential.

The topics focus on applications of AI and novel AI methods for RNA and DNA research, including, but not limited to:

- Nucleic Acid Structure and Function: RNA secondary and tertiary structure prediction (a classical baseline is sketched below), RNA function analysis, NA interactions
- Foundation and Generative Models for Nucleic Acids: (Multimodal) NA foundation models, Generative models for NAs
- Nucleic Acids in Therapeutics: NA drug design and discovery, NA modification, NA mutations
- Genomic Data Analysis: Genome reconstruction, Gene expression, Calling genetic variants, Pairwise and multiple NA sequence alignment, Single-cell transcriptomics and genomics
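As a classical baseline for the secondary-structure topic above, here is a sketch of the Nussinov dynamic program, which maximizes the number of complementary base pairs; modern AI methods aim to improve on exactly this kind of simplified energy model.

```python
# Watson-Crick pairs plus the G-U wobble pair.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov_max_pairs(seq: str, min_loop: int = 3) -> int:
    """Maximum number of non-crossing base pairs in an RNA sequence."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):          # enforce a minimum hairpin loop
        for i in range(n - span):
            j = i + span
            best = max(dp[i + 1][j], dp[i][j - 1])   # leave i or j unpaired
            if (seq[i], seq[j]) in PAIRS:
                best = max(best, dp[i + 1][j - 1] + 1)  # pair i with j
            for k in range(i + 1, j):                # bifurcation into two substructures
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov_max_pairs("GGGAAAUCC"))  # small illustrative example
```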
35
iclr2025_bi_align
# Workshop on Bidirectional Human-AI Alignment

This workshop focuses on bidirectional human-AI alignment, a paradigm shift in how we approach the challenge of human-AI alignment, which emphasizes the dynamic, complex, and evolving alignment process between humans and AI systems. This is grounded in the "bidirectional human-AI alignment" framework (see Definition and Reading List) derived from a systematic survey of over 400 interdisciplinary alignment papers in Machine Learning (ML), Human-Computer Interaction (HCI), Natural Language Processing (NLP), and more domains. In particular, it involves two directions to maximize its benefits for human society:

- Aligning AI with Humans (AI-centered perspective): focuses on integrating human specifications into training, steering, customizing, and monitoring AI systems;
- Aligning Humans with AI (Human-centered perspective): aims to preserve human agency and empower humans to critically evaluate, explain, and collaborate with AI systems.

## Challenges & Goals

The rapid advancements in general-purpose AI have precipitated the urgent need to align these systems with values, ethical principles, and goals that match the context of use, i.e., for individuals using an AI system, and for society at large. Traditionally, AI alignment has been viewed as a static, one-way process, with a primary focus on shaping AI systems to achieve desired outcomes and prevent negative side effects. However, as AI systems take on more complex decision-making roles, this **unidirectional AI alignment is inadequate to capture the dynamic, complicated, and evolving interactions between humans and AI systems**.

The core objectives of this workshop are twofold: (1) broadening the current understanding of AI alignment and inviting more researchers to collectively explore bidirectional human-AI alignment studies; (2) fostering interdisciplinary collaboration between researchers in multi-disciplinary domains, such as AI, HCI, and the social sciences, creating a platform for exchange and innovation.

## Scopes & Topics

This workshop aims to explore the design space of bidirectional human-AI alignment from a comprehensive view, calling for submissions from various disciplines and topics, including but not limited to (see all in the Call for Papers):

- Scope: Broader Definitions and Clarifications of Current Alignment Research;
- Opinions: Position Papers and Roadmaps for Future Alignment Research;
- Specification: Representation Approaches for Human Values, Behavior, Cognition, and Societal Norms for AI Alignment;
- Methods: Reinforcement Learning with Human Feedback, Algorithms, Interaction Mechanisms, UX Design for Alignment;
- Evaluation: Benchmarks, Metrics, or Human Evaluation for Multi-objective AI Alignment;
- Deployment: Customizable Alignment, Steerability, Interpretability, and Scalable Oversight;
- Societal Impact and Policy: Fostering an Inclusive Human-AI Alignment Ecosystem.
36
iclr2025_buildingtrust
# Workshop on Building Trust in Language Models and Applications

As Large Language Models (LLMs) are rapidly adopted across diverse industries, concerns around their trustworthiness, safety, and ethical implications increasingly motivate academic research, industrial development, and legal innovation. LLMs are increasingly integrated into complex applications, where they must navigate challenges related to data privacy, regulatory compliance, and dynamic user interactions. These complex applications amplify the potential of LLMs to violate the trust of humans. Ensuring the trustworthiness of LLMs is paramount as they transition from standalone tools to integral components of real-world applications used by millions. This workshop addresses the unique challenges posed by the deployment of LLMs, ranging from guardrails to explainability to regulation and beyond. The workshop will bring together researchers and practitioners from academia and industry to explore cutting-edge solutions for improving the trustworthiness of LLMs and LLM-driven applications. It will feature invited talks, a panel discussion, interactive breakout discussion sessions, and poster presentations, fostering rich dialogue and knowledge exchange. We aim to bridge the gap between foundational research and the practical challenges of deploying LLMs in trustworthy, user-centric systems.

## Workshop Scope

This workshop has a broad focus, including but not limited to:

1. Metrics, benchmarks, and evaluation of trustworthy LLMs
2. Improving reliability and truthfulness of LLMs
3. Explainability and interpretability of language model responses
4. Robustness of LLMs
5. Unlearning for LLMs
6. Fairness of LLMs
7. Guardrails and regulations for LLMs
8. Error detection and correction
37
iclr2025_data_problems
# Workshop on Navigating and Addressing Data Problems for Foundation Models

Foundation models (FMs) have become central to modern machine learning, with data playing a crucial role in their development and sparking increased attention to data-related challenges such as curation and attribution. Adapting traditional data-centric methods to FMs is challenging due to the scale of both data and model architectures, necessitating interdisciplinary collaboration and community efforts.

Building on the success of the first Data Problems in Foundation Models (DATA-FM) workshop at ICLR 2024, the second DATA-FM workshop will address persistent and emerging data-related challenges in FM deployment. While longstanding issues in data collection, curation, and synthesis remain relevant, new challenges have arisen as FMs are integrated into a growing number of applications and become increasingly multi-modal. Concurrently, the societal impact of AI has intensified, highlighting concerns such as data copyright. These evolving challenges emphasize the need for continued, focused discussions on data-related issues in FM development. Our goals include fostering a comprehensive understanding of these challenges across the entire FM pipeline and creating a platform for interdisciplinary researchers to connect, collaborate, and drive progress. We hope this workshop will serve as a catalyst for innovative solutions to critical data challenges, shaping the future of FMs and their wide-ranging applications.

We encourage submissions across a wide range of topics, including but not limited to:

- Data Collection and Curation for Foundation Models
  - Practical strategies for curating data (e.g., filtering, mixing, repairing) tailored to FM training stages.
  - Extending data curation techniques to Retrieval-Augmented Generation (RAG), multimodal settings, and LLM agents.
  - Theoretical frameworks for guiding data selection and scaling laws for foundation models.
- Data Attribution, Interpretability, and Data Marketplaces
  - Efficient techniques for attributing model outputs to specific training data (a gradient-similarity sketch follows below).
  - Evaluating and comparing data attribution methods.
  - Economic models for data pricing and the design of data marketplaces that ensure fair compensation.
- Legal and Technical Solutions for Data Copyright Protection
  - Mitigation strategies and mathematical frameworks for addressing copyright issues in FM training data.
  - Connections between copyright, privacy, and fairness, including adaptations of techniques like machine unlearning.
- Synthetic Data and Model Collapse
  - High-quality synthetic data generation and its impact on FM performance, robustness, and safety.
  - Understanding and mitigating model collapse through theoretical and empirical investigations.
- Data and Society (Safety, Privacy, Fairness, and Other Social Impacts)
  - Improving AI safety, privacy, and fairness through data-centric approaches.
  - Addressing the side effects of data curation on fairness and ethics in FMs.
- Benchmarks and Evaluations
  - Designing evaluation metrics for data-centric techniques and creating reliable dataset benchmarks for FMs.
  - Identifying and addressing pitfalls in existing dataset benchmarks, such as test data contamination.
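To ground the attribution item above, here is a hedged sketch of gradient-similarity attribution in the spirit of TracIn (Pruthi et al., 2020): a training example's influence on a test prediction is approximated by the dot product of their loss gradients. The model, loss function, and batches are hypothetical stand-ins.

```python
import torch

def grad_vector(model, loss):
    """Flatten the gradient of `loss` w.r.t. all trainable parameters."""
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def tracin_score(model, loss_fn, train_batch, test_batch, lr: float = 1.0):
    # Positive scores suggest the training example pushed the model toward
    # the test prediction (a "proponent"); negative scores, away from it.
    g_train = grad_vector(model, loss_fn(model, *train_batch))
    g_test = grad_vector(model, loss_fn(model, *test_batch))
    return lr * torch.dot(g_train, g_test).item()
```

The full method sums such scores over training checkpoints; at FM scale, making this tractable (e.g., via gradient projections) is an active research topic.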
38
iclr2025_delta
# Workshop on Deep Generative Models in Machine Learning: Theory, Principle and Efficacy

We are excited to invite submissions to the ICLR 2025 Workshop on Deep Generative Models: Theory, Principle, and Efficacy. This workshop aims to explore challenges and opportunities in advancing the theoretical foundations and practical applications of deep generative models (DGMs).

Theory topics include, but are not limited to:

- Expressivity of deep generative models: investigating the expressivity of deep generative models and their performance variations across different datasets
- Optimization and generalization of deep generative models
- Solving stochastic processes for deep generative models
- Sampling methods (a Langevin-dynamics sketch follows below)
- Model Stability and Convergence Analysis in DGMs
- Implicit Bias and Regularization in Generative Models
- Robustness and Generalization Boundaries of Generative Models
- Latent Space Geometry and Manifold Learning

Application areas include, but are not limited to:

- Improved sampling schemes
- Adversarial Robustness and Defense Mechanisms
- Scalability and Efficiency in High-Dimensional Generative Modeling
- Multimodal Generative Modeling Algorithms
- Structured Data Modeling
- Generative models for scientific discovery (AI4Science)
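As a concrete instance of the sampling-methods topic above, here is a sketch of unadjusted Langevin dynamics: given the score (the gradient of the log density), noisy gradient ascent yields approximate samples from the target. Step size and iteration count are illustrative.

```python
import numpy as np

def langevin_sample(score, x0: np.ndarray, step: float = 1e-2, n_steps: int = 1000):
    """Unadjusted Langevin dynamics: x <- x + (step/2) * score(x) + sqrt(step) * noise."""
    x = x0.copy()
    for _ in range(n_steps):
        x = x + 0.5 * step * score(x) + np.sqrt(step) * np.random.randn(*x.shape)
    return x

# Example: a standard Gaussian target, whose score function is simply -x.
sample = langevin_sample(score=lambda x: -x, x0=np.zeros(2))
```

Score-based diffusion models can be viewed as learning the score and running a closely related dynamics, which is why this primitive recurs throughout DGM theory.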
39
iclr2025_dl4c
The third DL4C workshop, titled "Emergent Possibilities and Challenges in Deep Learning for Code", provides a vibrant platform for researchers to share their work on deep learning for code, emphasizing emergent possibilities and challenges, for example: agentic methods for programming tasks, post-training and alignment for code, developer productivity and HCI for code, open science and responsible AI for code, and benchmarking and evaluation for code.

We invite original research paper submissions on any topic relevant to deep learning for code. This year, we specifically welcome submissions addressing recent challenges like:

- **Agentic Methods for Programming Tasks**: agents able to solve realistic coding tasks, such as solving GitHub issues or software development tasks.
- **Post-training and Alignment for Code**: alignment for code, including but not limited to how to learn from human feedback, execution feedback, and AI feedback for better code generation.
- **Developer Productivity and HCI for Code**: adaptation of models to users’ needs to increase developer productivity, including studies on human-AI interaction for code from different disciplines (Machine Learning, Human-Computer Interaction, Software Engineering, etc.).
- **Open Science and Responsible AI for Code**: contributions from researchers who follow responsible AI practices and strive for openness and transparency in their work, and who are willing to share their code, models, and data. We also welcome contributions from researchers interested in developing open science practices for deep learning for code.
- **Benchmarking and Evaluation for Code**: benchmarks for code, such as execution-based benchmarks, code understanding, code efficiency, model-based judges, and project-level context (a pass@k sketch follows below).

Other topics of interest include, but are not limited to:

- Reinforcement Learning for Code
- Data for Code
- Pre-training Methods and Representation for Code
- Natural Language to Code
- Formal Methods for Code
- Program Repair
- Code Translation
- Code Explanation
- Code Summarization
- Code Generation for Applications Beyond Code, such as Reasoning, Decision Making, and Algorithmic Discovery
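To make the evaluation topic above concrete, here is a sketch of the unbiased pass@k estimator popularized by execution-based code benchmarks (Chen et al., 2021): given n sampled programs of which c pass the unit tests, it estimates the probability that at least one of k samples passes.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k from n samples with c correct."""
    if n - c < k:
        # Every size-k subset must contain at least one passing sample.
        return 1.0
    # 1 minus the probability that a random size-k subset contains no passing sample.
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 200 samples per problem, 37 pass the tests.
print(pass_at_k(n=200, c=37, k=1))   # expected single-sample success rate
print(pass_at_k(n=200, c=37, k=10))  # success rate with a 10-sample budget
```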
40
iclr2025_embodiedai
# Workshop on Embodied Intelligence with Large Language Models in Open City Environments

This workshop is motivated by a fact: human beings have strong embodied intelligence in open environments, but this is still challenging for large language models and LLM agents. Despite some progress on embodied AI in static and indoor environments, LLM agents still struggle with tasks in large-scale outdoor environments, such as navigation, search, spatial reasoning, and task planning. Therefore, we propose this workshop to discuss recent advances in the related research areas and look ahead to future developments. Specifically, it delves into topics of outdoor embodied intelligence, such as spatial intelligence and embodied perception, reasoning and planning, decision-making and action, multi-agent and human-agent collaboration, and the development of simulators, testbeds, datasets, and benchmarks. This comprehensive exploration of embodied LLM agents in open city environments holds the potential to advance the field of artificial intelligence and open up new applications in various domains. We also have a special poster/short paper session for those solutions that perform best in the Open Urban Environment Embodied Intelligence Competition.

We would like to discuss the following topics in this workshop:

(1) Spatial intelligence and embodied perception with LLM agents in open city environments:

- How LLM agents can develop a sense of space and time in open city environments.
- The role of embodied perception in enhancing the performance of LLM agents in outdoor environments.
- Techniques for integrating spatial intelligence and embodied perception for LLM agents in outdoor environments.
- Other related topics.

(2) Reasoning and planning with LLM agents in open city environments:

- How LLM agents can use reasoning to make decisions in open city environments.
- Strategies for planning actions and sequences of tasks for LLM agents in city environments.
- Analysis of the biases and limitations of reasoning and planning with LLMs.
- Other related topics.

(3) Decision-making and action with LLM agents in open city environments:

- How LLM agents can make decisions based on outdoor context and goals.
- Combining large language models and small machine learning models for decision-making in outdoor environments.
- Techniques for evaluating and improving the decision-making and action capabilities of LLM agents in outdoor environments.
- Other related topics.

(4) Multi-agent and human-agent collaboration in open environments:

- How multiple LLM agents can collaborate to achieve common goals in outdoor environments.
- The challenges and opportunities of human-agent collaboration in open city environments.
- Strategies for designing effective multi-agent systems in open city environments.
- Perspectives on human-AI systems for outdoor applications.
- Other related topics.

(5) Simulators, testbeds, datasets, and benchmarks for embodied LLM agents in city environments:

- The development and use of simulators and testbeds for evaluating embodied LLM agents in outdoor environments.
- The creation and curation of datasets for training and testing embodied LLM agents in outdoor environments.
- The establishment of benchmarks and evaluation metrics for embodied LLM agents in outdoor environments.
- Other related topics.
41
iclr2025_financial_ai
# Workshop on Advances in Financial AI: Opportunities, Innovations and Responsible AI

The financial industry is undergoing a transformative shift fueled by rapid advancements in artificial intelligence. From algorithmic trading and fraud detection to personalized banking and investment strategies, AI is redefining how financial services operate. This workshop will bring together researchers, industry professionals, and policymakers to share the latest developments, address emerging challenges, and establish a roadmap for responsible AI integration in finance.

## Topics of Interest

Topics of interest include, but are not limited to: generative AI with applications in finance, time-series modelling, financial datasets, multi-agent systems, and practical financial applications such as forecasting, fraud detection, risk management, and quantitative finance.
42
iclr2025_fm_wild
# Workshop on Foundation Models in the Wild

In the era of AI-driven transformations, foundation models (FMs) have become pivotal in various applications, from natural language processing to computer vision. These models, with their immense capabilities, reshape the future of scientific research and the broader human society, but also introduce challenges in their in-the-wild deployments. The Workshop on FMs in the Wild delves into the urgent need for these models to be useful when deployed in our societies. The significance of this topic cannot be overstated, as the real-world implications of these models impact everything from daily information access to critical decision-making in fields like medicine and finance. Stakeholders, from developers to end-users, care deeply about this because the successful integration of FMs into in-the-wild frameworks necessitates careful consideration of many properties, including adaptivity, reliability, efficiency, and reasoning ability.

## Key Problems We Aim to Address

- In-the-wild Adaptation: How can we leverage techniques such as Retrieval-Augmented Generation (RAG), In-context Learning (ICL), or Fine-tuning (FT) to adapt FMs to specific domains, such as drug discovery, education, or clinical health? (A minimal RAG sketch follows below.)
- Reasoning and Planning: How can FMs be enhanced to tackle more complex in-the-wild tasks that require multi-step reasoning or decision-making, such as multi-hop question answering, mathematical problem-solving, theorem proving, code generation, or robot planning scenarios?
- Reliability and Responsibility: How can FMs work reliably outside their training distribution? And how can we address issues like hallucination, fairness, ethics, safety, and privacy within society?
- Practical Limitations in Deployment: How can FMs tackle challenges in practical applications, such as system constraints, memory requirements, response time demands, data acquisition barriers, and computational costs for inference-time scaling and long-context input?

The Workshop on Foundation Models in the Wild @ ICLR 2025 invites submissions from researchers in the fields of machine learning pertaining to foundation models and their in-the-wild applications. Additionally, we welcome contributions from scholars in the natural sciences (such as physics, chemistry, and biology) and social sciences (including pedagogy and sociology) that necessitate the use of foundation models.

## Scope

We welcome contributions across a broad spectrum of topics, including but not limited to:

- Innovations in techniques for customizing models to individual user preferences, tasks, or domains
- Advancements in the reasoning and planning abilities of FMs in complex real-world challenges
- Theoretical and empirical investigations into the reliability and responsibility of various FMs
- Strategies for overcoming practical limitations (e.g., memory, time, data) of FMs in broad applications
- Methods for integrating multiple modalities (e.g., text, images, action) into a unified in-the-wild framework
- Discussions on FM agents that perform intricate tasks through interaction with the environment
- In-depth discussions exploring the in-the-wild deployments and applications of FMs
- Benchmark methodologies for assessing FMs in real-world settings
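To ground the RAG item above, here is a deliberately minimal sketch of retrieval-augmented generation. The `embed` and `llm` callables are hypothetical stand-ins for a real embedding model and language model, and brute-force dot-product retrieval over a handful of documents stands in for a proper vector index.

```python
import numpy as np

def retrieve(query: str, docs: list[str], embed, k: int = 3) -> list[str]:
    """Return the k documents whose embeddings best match the query."""
    q = embed(query)
    scored = sorted(docs, key=lambda d: -float(np.dot(embed(d), q)))
    return scored[:k]

def rag_answer(query: str, docs: list[str], embed, llm) -> str:
    # Prepend retrieved context so the model can ground its answer in it.
    context = "\n\n".join(retrieve(query, docs, embed))
    prompt = f"Answer using the context below.\n\n{context}\n\nQuestion: {query}"
    return llm(prompt)
```

In-the-wild deployments layer retrieval quality, freshness, and attribution concerns on top of this skeleton, which is precisely where many of the workshop's questions arise.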
43
iclr2025_fpi
# Frontiers in Probabilistic Inference: Learning meets Sampling ## About the Workshop The Frontiers in Probabilistic Inference: Learning meets Sampling (FPI) workshop at ICLR 2025 focuses on modern approaches to probabilistic inference to address the challenging and under-explored area of sampling from an unnormalized distribution. Sampling spans a wide range of difficult and timely problems, from molecular dynamics simulation and Bayesian posterior inference/inverse problems to sampling from generative models weighted by a target density (e.g., fine-tuning, inference-time alignment). We hope to provide an inclusive and collaborative environment to discuss emerging ML methods for learning samplers and their applications to real-world problems. We aim to facilitate discussions around identifying some key challenges of learning-based approaches, compared to classical sampling approaches, along with techniques to overcome them. We will center workshop discussions around the following topics/questions: - Sampling methods and their connections to optimal transport and optimal control. - Classical sampling approaches and how learning accelerates them. - Connections between sampling methods and physics. - Understanding sampling from theoretical perspectives. - Applications of sampling to natural sciences, Bayesian inference, LLM fine-tuning, and more. We invite all submissions of original work across three different tracks: - Research Papers - Challenges and Reflections - Benchmarks and Datasets ### Research Papers Goals: The goal of the Research Papers track is to highlight all original research work in the field of sampling. Some examples of the research topics include, but aren't limited to: - Bayesian posterior inference/inverse problems. - Amortized sampling from Boltzmann densities. - Sampling from generative models (diffusion models and LLMs) weighted by a target density, i.e., fine-tuning, inference-time alignment, etc. - Applications: e.g., molecular dynamics simulations, statistical physics, etc. ### Challenges and Reflections Goals: The goal of the Challenges and Reflections track is to explore setbacks, unexpected outcomes, and the valuable lessons learned from methods that didn’t achieve their intended goals. Some examples of the research topics include, but aren't limited to: - Ideas and methods that didn't make a paper, but whose methodology and results can provide valuable insights for future researchers when discussed. - Challenges and open problems in the field. We encourage researchers to discuss (1) why the current state-of-the-art research fails to address those challenges, and (2) which directions they believe the community must focus on and pursue to overcome those challenges. ### Benchmarks and Datasets Goals: The goal of the Benchmarks and Datasets track is to encourage submissions of papers that highlight datasets, tools, or benchmarks that can be disseminated to the community during the workshop.
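Since the workshop's central object is sampling from an unnormalized density, a minimal sketch of the classical baseline that learned samplers are usually compared against may help: random-walk Metropolis-Hastings. The 2D double-well target, step size, and chain length below are illustrative assumptions; the key point is that only the unnormalized log-density is needed, because the normalizing constant cancels in the acceptance ratio.

```python
import numpy as np

def log_unnorm_density(x):
    # Illustrative 2D double-well target; any unnormalized log-density works.
    return -0.25 * (x[0] ** 2 - 4.0) ** 2 - 0.5 * x[1] ** 2

def random_walk_metropolis(log_p, x0, n_steps=10_000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    logp_x = log_p(x)
    samples = np.empty((n_steps, x.size))
    for t in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)  # symmetric proposal
        logp_prop = log_p(prop)
        # Accept with prob min(1, p(prop)/p(x)); the normalizer cancels here.
        if np.log(rng.uniform()) < logp_prop - logp_x:
            x, logp_x = prop, logp_prop
        samples[t] = x
    return samples

samples = random_walk_metropolis(log_unnorm_density, x0=[0.0, 0.0])
print(samples[5000:].mean(axis=0))  # crude moment estimate after burn-in
```

Learned samplers aim to beat exactly this kind of chain on mixing speed, for example by amortizing proposals with a neural network.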
44
iclr2025_gem
# Workshop on Generative and Experimental Perspectives for Biomolecular Design Biomolecular design, through artificial engineering of proteins, molecules, and nucleic acids, holds immense promise in addressing pressing medical, industrial, and environmental challenges. While generative machine learning has shown significant potential in this area, a palpable disconnect exists with experimental biology: many ML research efforts prioritize static benchmark performance, potentially sidelining impactful real-world applications. The Generative and Experimental perspectives in bioMolecular design (GEM) workshop seeks to bridge this gap by bringing computationalists and experimentalists together. Together, we will explore the strengths and challenges of generative ML in biology and its experimental integration, and pinpoint biological problems ready for ML. GEM is collaborating with Nature Biotechnology to allow exceptional submissions to be considered for fast-tracking in their journal. GEM features two tracks of submission: an in-silico generative machine learning track and an experimental track for any papers that have wet lab results. Our lineup features renowned scientists as panelists and emerging leaders as speakers, encapsulating a spectrum from high-throughput experimentation and computational biology to generative ML. With a diverse organizing team and backed by industry sponsors, we dedicate the workshop to pushing the boundaries of ML's role in biology. GEM has two tracks: a machine learning track and a biology track. Topics include, but are not limited to, the following: ### ML track - Generative ML advancements for biomolecular design with in silico results. - Inverse design of all biomolecules - Modelling biomolecular data - Model interpretability ### Biology track - Biological problems and data ripe for generative ML and/or employment of ML for biomolecular design with wet lab experimental results. - Biological problems apt for ML applications - High-throughput data generation methods - Adaptive experimental design - Benchmarks, datasets, and oracles
45
iclr2025_haic
HAIC 2025, the First Workshop on Human-AI Coevolution, focuses on the emerging field of Human-AI Coevolution (HAIC) to understand the feedback loops that emerge through continuous human-AI coadaptation. This workshop focuses on new approaches beyond AI performance benchmarks, exploring multiple levels of analysis, spanning single human-AI agent collaboration behavior to long-term interaction among multiple humans and AI systems, with impact across social institutions such as healthcare and criminal justice. ## Subject Areas We invite contributions that address various aspects of human-AI coevolution (HAIC) from diverse disciplines. Submissions should align with the overarching goal of the workshop, which is to explore the intricate interaction between humans and AI systems over extended periods. We welcome submissions of either (i) work that provides innovative insights, case studies, empirical analyses, and theoretical contributions addressing HAIC, (ii) position papers that make relevant arguments about HAIC, or (iii) expressions of interest in which prospective attendees describe their general background and interests in HAIC. In particular, we are interested in work that delves into the following subject areas: 1. Human-AI Interaction and Alignment - Evolution of human expectations and trust in AI systems - Design principles for aligning AI systems with human values - Ethical and societal implications of HAIC - Effects of HAIC on human autonomy and social norms 2. Algorithmic Adaptation and Robustness - Enhancements to Reinforcement Learning from Human Feedback (RLHF) - Technical frameworks for improving AI adaptability to human preferences - Strategies for reducing bias and promoting fairness in AI decision-making - Techniques for ensuring AI robustness across diverse contexts 3. Long-Term Societal Impact and Safety - Implications of HAIC on governance, policy, and public decision-making processes - Integration of AI alignment principles into socio-technological systems - Reimagining AI safety in light of dynamic human-AI interactions - Evaluating the impact of existing AI systems on future developments 4. Bidirectional Learning Beyond Performance Metrics - Exploration of how prolonged human-AI interactions shape cognition and decision-making - Revising evaluation metrics to assess AI systems through the lens of HAIC - Investigating the interplay between human behavior and AI agency 5. Shaping Collective Behavior and Learning - Examining AI's influence on group decision-making and consensus-building - Addressing the role of AI in collaborative environments such as education and policy-making - Understanding implicit biases formed through AI-mediated interactions 6. Dynamic Feedback Loops in Socially Impactful Domains - Real-time feedback mechanisms in critical contexts (e.g., healthcare, education, criminal justice) - Addressing unique demands of domain-specific AI-human interactions - The role of AI in shaping outcomes in high-stakes environments 7. Socio-Technological Bias, Norms, and Ethics - Critical analysis of how AI systems perpetuate or mitigate societal biases - Examining ethical implications of AI feedback loops in decision-making - Exploring the reshaping of social norms through AI interactions - Addressing complexities of bias in the context of HAIC We welcome submissions that provide innovative insights, case studies, empirical analyses, and theoretical contributions addressing these subjects.
Our aim is to facilitate interdisciplinary dialogue, foster collaboration, and advance the understanding of HAIC as a vital research area.
46
iclr2025_icbinb
# I Can't Believe It's Not Better: Challenges in Applied Deep Learning Why don’t deep learning approaches always deliver as expected in the real world? Dive deep into the pitfalls and challenges of applied deep learning. In recent years, we have witnessed a remarkable rise of deep learning (DL), whose impressive performance on benchmark tasks has led to increasing ambitions to deploy DL in real-world applications across all fields and disciplines [1, 2, 3, 4, 5]. However, despite its potential, DL still faces many challenges during deployment in dynamic, real-world conditions, exposing practical limitations that are often overlooked in controlled benchmarks. Current publication mechanisms tend to prioritize solutions that work on standard benchmarks, lacking a platform to systematically collect real-world failure cases. Moreover, discussions about these failures are usually confined within specific domains, with limited cross-domain interaction, even though these failures may have similar underlying causes. Establishing a platform for collecting and sharing real-world challenges and failures of DL can address fundamental issues to facilitate more successful deployment of DL across domains, and enhance understanding of theoretical and empirical weaknesses in machine learning (ML) research. Building such a platform and fostering this community has been the continuous goal of our I Can’t Believe It’s Not Better (ICBINB) initiative. As DL systems have become increasingly present in the everyday lives of non-scientists as well, we now want to put a special focus on real-world applications. Therefore, in this ICBINB workshop, we aim to explore the challenges, unexpected outcomes, and common principles underlying similar issues and failure modes encountered across various fields and disciplines when deploying DL models in real-world scenarios. We will focus the discussion on: Challenges & failure modes: We will invite papers from diverse fields including but not limited to healthcare, scientific discovery, robotics, education, equality & fairness, and social sciences to discuss the challenges and failure modes when deploying DL models for domain-specific applications, as well as the underlying reasons. The failure modes may include suboptimal performance, concerns with the safety and reliability of applying DL models in unpredictable real-world applications, as well as ethical and societal challenges. Common challenges across domains & underlying reasons: We aim to discuss common reasons or patterns in challenges and failure modes across disciplines, which may include, but are not limited to, data-related issues (e.g., distribution shift, bias, label quality), model limitations (e.g., ethics, fairness, interpretability, scalability, domain alignment), and deployment challenges (e.g., computational demands, hardware constraints). This workshop is one in a series forming part of the larger I Can't Believe It's Not Better (ICBINB) activities. We are a diverse group of researchers promoting the idea that there is more to machine learning research than tables with bold numbers. We believe that understanding in machine learning can come through more routes than iteratively improving upon previous methods, and as such this workshop aims to focus on understanding through negative results.
Previous workshops have focused on ideas motivated by beauty and on gaps between theory and practice in probabilistic ML; we also run a monthly seminar series aiming to crack open the research process and showcase what goes on behind the curtain. Read more about our activities and our members here. We invite researchers and industry professionals to submit their papers on negative results, failed experiments, and unexpected challenges encountered in applying deep learning to real-world problems across industry and science. The primary goal of this workshop is to create a platform for open and honest discussion about the hurdles and roadblocks in applying deep learning. We believe that sharing these experiences is crucial for the advancement of the field, providing valuable insights that can prevent others from repeating the same mistakes and fostering a culture of transparency and learning. We invite submissions from novel, ongoing, and unpublished research that applies deep learning to various domains including, but not limited to, social sciences, biology, physics, chemistry, engineering, robotics, psychology, healthcare, neuroscience, marketing, economics, or finance. Submitted papers should contain the following four elements: - A use case that was tackled with deep learning. - A solution proposed in the deep learning literature for this type of use case. - A description of the (negative) outcome of applying the solution. - An investigation of (and ideally an answer to) the question of why it did not work as promised by the deep learning literature. The potential reasons for failure may include but are not limited to data-related issues (e.g., distribution shift, bias, label quality, noisy measurement, quality of simulated data), model limitations (e.g., assumption violations, robustness, interpretability, scalability, representation misalignment), and deployment challenges (e.g., computational demands, hardware constraints); a minimal check for the first of these, distribution shift, is sketched after this description. Besides these four points, papers will be assessed on: - Rigor and transparency in the scientific methodologies employed. - Novelty and significance of insights. - Quality of discussion of limitations. - Reproducibility of results. - Clarity of writing.
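As a concrete example of diagnosing the most common data-related failure mode named above, here is a minimal sketch of a classifier two-sample test for distribution shift. The synthetic data, feature dimensions, and classifier choice are illustrative assumptions; the idea is simply that if a classifier can separate training data from deployment data (AUC well above 0.5), the two distributions differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def shift_score(train_X, deploy_X, seed=0):
    """Classifier two-sample test: train a model to tell training data from
    deployment data. AUC near 0.5 means no detectable shift; near 1.0 means
    a strong distribution shift."""
    X = np.vstack([train_X, deploy_X])
    y = np.r_[np.zeros(len(train_X)), np.ones(len(deploy_X))]
    clf = RandomForestClassifier(n_estimators=100, random_state=seed)
    return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()

rng = np.random.default_rng(0)
train_X = rng.normal(0.0, 1.0, size=(500, 10))
same = rng.normal(0.0, 1.0, size=(500, 10))      # same distribution
shifted = rng.normal(0.5, 1.3, size=(500, 10))   # covariate shift

print(f"no shift: AUC = {shift_score(train_X, same):.2f}")     # ~0.5
print(f"shifted:  AUC = {shift_score(train_X, shifted):.2f}")  # well above 0.5
```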
47
iclr2025_llm_reason_and_plan
# Workshop on Reasoning and Planning for Large Language Models ## About The Workshop This workshop explores the growing capabilities of large language models (LLMs), such as OpenAI's o1 model, in reasoning, planning, and decision-making, highlighting recent advances and challenges. We aim to examine how reinforcement learning methods, post-training optimization, and efficient inference techniques can further enhance LLMs' reasoning capabilities. Topics include training approaches for enhancing reasoning and planning abilities, scaling inference for complex tasks, developing robust benchmarks, and extending LLMs to multi-modal and embodied environments. We will also discuss broader themes such as causal reasoning, collaborative multi-agent systems, uncertainty, and explainability to offer insights and guidance for the further development of reasoning and planning in LLMs. ## Topics The workshop will cover a range of topics, including but not limited to: 1. Training Methodologies for Enhancing Reasoning and Planning Capabilities in LLMs: We will explore the application of RL algorithms and other effective approaches in enhancing LLM reasoning and planning abilities during both pre-training and post-training stages. We will examine how techniques like Reinforcement Learning from Human Feedback (RLHF) can be adapted and expanded for efficient reasoning. Key questions include: - How can RL and other effective methods be utilized in pre-training to improve reasoning abilities? - What post-training approaches (e.g., fine-tuning, RLHF) are most effective for LLM planning tasks? - How can synthetic data generation and self-supervised training enhance LLM reasoning and planning? 2. Inference Time Scaling for Complex Reasoning Tasks: We will discuss challenges and innovations in scaling up reasoning during inference. As models become larger and tasks more complex, efficient inference mechanisms are critical. Topics of interest include: - What are the most promising methods for scaling inference times in reasoning-heavy tasks? - How can models dynamically allocate resources during inference to optimize for reasoning and planning? (A minimal best-of-N sketch appears after this description.) 3. Benchmarking Reasoning and Planning: Developing robust benchmarks for evaluating reasoning and planning in LLMs is critical to track progress. This session will address the need for new metrics and standardized tasks to assess reasoning abilities across different scenarios. Key discussions will include: - What benchmarks can accurately reflect the reasoning and planning capabilities of LLMs? - How do we design tasks that evaluate long-horizon reasoning and complex decision-making? 4. Multi-modality and Embodiment in LLMs: As LLMs increasingly integrate with multi-modal environments, reasoning across multiple data types (e.g., vision, sound, text) becomes more essential. This session will explore the application of reasoning and planning in multi-modality and embodied AI systems, including robotics and real-world interactions: - How can LLMs enhance multi-modal reasoning and planning to better interact with diverse environments? - What are the key challenges and opportunities in applying LLMs to multi-modal tasks, including those requiring embodied reasoning? 5. Exploring Broader Topics in Reasoning and Planning: In addition to the core themes mentioned above, our discussions will also encompass a broader range of emerging topics, including: - Causal Reasoning: How can LLMs move beyond pattern recognition to infer causal relationships?
- Collaborative Reasoning in Multi-Agent Systems: How can LLMs enable multi-agent cooperation for distributed tasks? - Uncertainty and Robustness: How can LLMs improve reasoning under ambiguous information? - Human-in-the-Loop Systems: How can human feedback refine LLM decision-making processes? - Explainability: How can we make LLM reasoning and planning more transparent and interpretable for real-world applications? ## Scope We welcome contributions across a broad spectrum of topics, including but not limited to: - Training methodologies for enhancing reasoning and planning in LLMs - Efficient inference for complex reasoning tasks - Benchmarking reasoning and planning capabilities - Multi-modality and embodiment in LLMs - Emerging trends in LLM reasoning and planning
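One of the simplest inference-time scaling strategies referenced under topic 2 is best-of-N sampling: draw several candidate responses and keep the one a scorer prefers. The sketch below is a toy instance; `toy_generate` and `toy_score` are hypothetical stand-ins for a temperature-sampled LLM call and a reward/verifier model, not any specific API.

```python
import random

def best_of_n(generate, score, prompt, n=8):
    """Draw n candidate responses and keep the highest-scoring one.

    `generate` and `score` are hypothetical stand-ins for a temperature-
    sampled LLM call and a reward/verifier model, not a specific API.
    """
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

# Toy stand-ins so the sketch runs end to end.
random.seed(0)

def toy_generate(prompt):
    return f"{prompt} -> answer {random.randint(0, 100)}"

def toy_score(response):
    # Pretend verifier: prefers answers close to 42.
    return -abs(int(response.split()[-1]) - 42)

print(best_of_n(toy_generate, toy_score, "What is 6 x 7?"))
```

Compute scales linearly in N at inference time, with no weight updates, which is why this family of methods trades serving cost for reasoning accuracy.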
48
iclr2025_lmrl
# Learning Meaningful Representations of Life (LMRL) ## About this workshop Since the LMRL workshop was last held at NeurIPS 2022, interest in representation learning for biology has surged, with new ideas challenging traditional approaches and sparking discussions on how best to capture the complexity of biological systems through machine learning. The availability of large-scale public DNA and RNA sequencing, protein sequences and 3D structures, mass spectrometry, and cell painting datasets (JUMP-CP, RxRx3, Human Cell Atlas) has fueled the development of numerous large-scale “foundation models” for biological data (Rozenblatt-Rosen et al. 2021; Fay et al. 2023; Chandrasekaran et al. 2023). These models aim to extract “meaningful” representations from noisy, raw and unstructured high-dimensional data to address a variety of biological questions. The AIxBio community has two important questions to answer: (i) what data, models and algorithms do we need to ensure that we extract meaningful representations (sufficient for their intended applications); and (ii) what are the appropriate methods for evaluating the quality of these embeddings, both in terms of the richness of information they capture, and their ability to generalize and improve performance on downstream tasks? We believe that the early stage of this field presents a remarkable opportunity to foster discussion, collaboration, and insight sharing through our workshop on “Learning Meaningful Representations of Life”. Our agenda will encourage discussion both about new methods for representation learning in biology as well as biologically relevant & substantive evaluations to probe the generalization capabilities of the learned representations. Building upon the themes of previous years, the workshop will focus on multiple layers of biological information: genomes, molecules, cells, phenotype and beyond. It is essential for such “meaningful representations” to not only generalize across modalities but also to capture biological information across different scales, from subcellular to multi-cellular and organism-wide processes. Harmonizing representations from molecules, proteins, cells, and tissues enables in-silico simulation of biological processes, interactions, and causal mechanisms, ultimately building towards a foundation model of an AI-powered virtual cell (Bunne et al. 2024), i.e., universal simulators of cellular function and behavior. For the LMRL workshop at ICLR 2025, our objectives are (i) to convene those engaged in learning representations within and across different modalities of biological data, (ii) to discuss cutting-edge methods for assessing and measuring the significance of learned biological representations, (iii) to create a platform for developing open-source standardization of datasets and evaluation metrics for benchmarking new methods, and (iv) to envisage potential real-world problems that could be solved with improved strategies for learning meaningful representations of life. The LMRL Workshop returns to ICLR 2025 to foster discussion and collaboration in the growing field of representation learning for biological data. With the increasing availability of large-scale biological datasets—spanning genomics, proteomics, cell imaging, and more—the development of innovative machine learning methods to extract and evaluate meaningful representations has never been more critical. This year, we aim to bring together researchers at the forefront of AI and biology to address two key questions: 1.
What data, models, and algorithms are needed to extract meaningful biological representations that generalize well to downstream tasks? 2. How can we evaluate the quality and utility of these learned representations? We invite submissions on a wide range of topics, including but not limited to: - Foundation models for biological data - Multimodal representation learning - Multiscale representation learning to connect molecular and biological data - Generalizability and interpretability in biological datasets - Causal representation learning in biology - Active learning for experimental design - Generative models for molecular design - Modeling biological perturbations and their effects - Long-range dependency modeling in sequences and spatial omics - New datasets, benchmarks, and evaluation metrics
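The second key question above, evaluating the quality and utility of learned representations, is commonly approached with a linear probe: freeze the embeddings and test how well a simple linear classifier predicts a downstream label. Below is a minimal sketch; the synthetic "embeddings", dimensions, and label construction are all illustrative assumptions standing in for the output of a real biological foundation model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in for frozen embeddings from a biological foundation model:
# 1,000 "cells" with 128-d representations and a binary phenotype label.
emb = rng.standard_normal((1000, 128))
labels = (emb[:, :4].sum(axis=1) > 0).astype(int)  # signal lives in a few dims

X_tr, X_te, y_tr, y_te = train_test_split(emb, labels, test_size=0.3, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"linear-probe accuracy: {probe.score(X_te, y_te):.3f}")
```

A high probe score suggests the relevant biology is linearly decodable from the representation; comparing probe scores across models is one standard, if partial, evaluation protocol.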
49
iclr2025_mcdc
# Workshop on Modularity for Collaborative, Decentralized, and Continual Deep Learning ## Summary While the success of large-scale deep learning models has hinged on the "bigger is better" approach of scaling model size and training data, this paradigm may rapidly be reaching an inflection point. Beyond the prohibitive cost of training and maintaining gigantic models, this approach exposes and exacerbates inherent flaws in the current design philosophy of machine learning systems. One of the most glaring contradictions lies in the development life cycle of these models which, once deprecated, are simply discarded in favor of new ones and are generally trained from scratch. This unsustainable practice stems from the fact that models are currently built and trained as generalist black-box monolithic systems where functionalities and emerging capabilities are intertwined in their parameters and any attempt to change a specific aspect can have unpredictable and potentially disastrous consequences for the entire model's performance (e.g., catastrophic forgetting). In stark contrast, a fundamental principle in software development is the organization of code into modular components. This allows developers to import modules and seamlessly integrate new functionalities, improving code reusability and maintainability. Similarly, biological systems provide compelling evidence for the benefits of modularity and functional specialization, such as rapid adaptation to new environments and resilience to perturbations. Despite these clear benefits, modular approaches are rarely applied in the development of machine learning models, presenting significant opportunities for innovation. **Scope and Topics:** The scope of this workshop covers all methods enabling collaborative development of modular models. This includes mixture-of-experts where each expert can be independently trained, decentralized training to regularly share information between experts, and upcycling to re-use existing models. ## Topics The workshop aims to explore new paradigms in designing neural network architectures based on modularity, functional specialization, and model recycling to enable more flexible and reusable architectures and unlock the collaborative development of large-scale models. A non-exhaustive list of topics of interest includes: - Mixture-of-Experts (MoE) Architectures: advancements in MoE for sparsely activated models, including novel training methods, efficient routing algorithms, and applications in diverse domains and modalities. - Routing of Specialized Experts (MoErging): Exploring techniques for effectively recycling and routing among pre-trained models or Parameter-Efficient Fine-Tuning (PEFT) modules as specialized experts. - Upcycling and MoE-fication: Exploring techniques for adapting existing dense models into modular frameworks, including converting monolithic architectures into MoE systems. - Model Soups and Model Merging: Investigating methods for combining independently trained checkpoints to create better and multi-task models, and understanding the theoretical foundations of model merging. (A minimal parameter-averaging sketch follows this description.) - Applications of modularity: We encourage explorations of modular architectures to create more flexible and maintainable models, particularly in areas like lifelong/continual learning, machine unlearning, and compositional generalization.
- Decentralized and Collaborative Training: Developing novel algorithms and engineering solutions for extremely communication-efficient collaborative and distributed training of models, modular and otherwise. - Adaptive Architectures: Designing architectures that dynamically adjust their structure and computation at runtime to modulate computational capacity based on the input data, task demands, or available resources. This includes dynamic depth, dynamic width, and conditional computation.
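To ground the model-merging topic flagged above, here is a minimal sketch of the simplest merging recipe, a uniform model soup: element-wise averaging of the parameters of several checkpoints that share an architecture. The tiny two-layer network is an illustrative assumption; in practice the checkpoints would be independent fine-tunes of one pre-trained model, and weighted or greedy soups often work better than the uniform average shown here.

```python
import torch
import torch.nn as nn

def uniform_soup(state_dicts):
    """Average parameters across same-architecture checkpoints
    (a 'uniform model soup')."""
    return {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }

# Illustrative: three checkpoints of the same small architecture; in
# practice these would be independent fine-tunes of one pre-trained model.
def make_model():
    return nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

checkpoints = [make_model().state_dict() for _ in range(3)]
souped = make_model()
souped.load_state_dict(uniform_soup(checkpoints))
print(souped(torch.randn(1, 16)).shape)  # merged model is immediately usable
```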
50
iclr2025_mldpr
# The Future of Machine Learning Data Practices and Repositories ## About this workshop Datasets are a central pillar of machine learning (ML) research—from pretraining to evaluation and benchmarking. However, a growing body of work highlights serious issues throughout the ML data ecosystem, including the under-valuing of data work, ethical issues in datasets that go undiscovered, a lack of standardized dataset deprecation procedures, the (mis)use of datasets out-of-context, an overemphasis on single metrics rather than holistic model evaluation, and the overuse of the same few benchmark datasets. Thus, developing guidelines, goals, and standards for data practices is critical; beyond this, many researchers have pointed to a need for a more fundamental culture shift surrounding data and benchmarking in ML. This workshop aims to facilitate a broad conversation about the impact of ML datasets on research, practice, and education—working to identify current issues, propose new techniques, and establish best practices throughout the ML dataset lifecycle. In particular, we highlight the role of data repositories in ML—administrators of these repositories, including OpenML, HuggingFace Datasets, and the UCI ML Repository, will contribute their perspective on how ML datasets are created, documented, and used and discuss the practical challenges of implementing and enforcing best practices on their platforms. By involving representatives from three major ML repositories and influential researchers from ML, law, governance, and the social sciences, our intent is that this workshop can serve as a catalyst for real positive changes to the ML data ecosystem. We invite submissions related to the role of data practices in machine learning, including but not limited to the following topics of interest: - Data repository design and challenges, particularly those specific to ML - Dataset publication and citation - FAIR and AI-ready datasets - Licensing for ML datasets - ML dataset search and discovery - Comprehensive data documentation - Data documentation methods for foundation models - Data curation and quality assurance - Best practices for revising and deprecating datasets - Dataset usability - Dataset reproducibility - FAIR ML models - Benchmark reproducibility - Holistic and contextualized benchmarking - Benchmarking and leaderboard ranking techniques - Overfitting and overuse of benchmark datasets - Non-traditional/alternative benchmarking paradigms
51
iclr2025_mlgenx
# Workshop on Machine Learning for Genomics Explorations Our limited understanding of the biological mechanisms underlying diseases remains a critical bottleneck in drug discovery. As a result, we often lack insights into why patients develop specific conditions, leading to the failure of many drug candidates in clinical trials. Recent advancements in genomics platforms and the emergence of diverse omics datasets have sparked increasing interest in this field. The primary objective of this workshop is to bridge the gap between machine learning and genomics, emphasizing target identification and emerging drug modalities such as gene and cell therapies and RNA-based drugs. By fostering interdisciplinary collaboration, we aim to advance the integration of these disciplines and accelerate innovation in drug discovery. This year, the workshop will feature three distinct tracks designed to welcome a diverse array of researchers in the field of machine learning and biology: the Main Track, including application and ML topics, the Special Track on LLMs and Agentic AI, and the Tiny Papers Track. Papers in the main and the special tracks must be prepared and submitted as a single file: 8 pages for the paper, with unlimited pages for references, the impact statement, and appendices. Both contributions introducing new ML methods to existing problems and those highlighting and explaining open problems are welcome. We also encourage submissions related to applications in molecular biology, including, but not limited to, single-cell RNA analysis, bulk RNA studies, proteomics, and microscopy imaging of cells and/or tissues. We consider a broad range of subject areas including but not limited to the following topics. Main Track: - Foundation models for genomics - Biological sequence design - Interpretability and Generalizability in genomics - Causal representation learning - Perturbation biology - Modeling long-range dependencies in sequences, single-cell and spatial omics - Integrating multimodal perturbation readouts - Active learning in genomics - Generative models in Biology - Multimodal representation learning - Uncertainty quantification - Optimal transport - Experimental design for Biology - Graph neural networks and knowledge graphs - New datasets and benchmarks for genomics explorations Special Track on LLMs and Agentic AI: - Pre-training multi-omics models - Synthetic data generation and data quality for pre-training, fine-tuning and instruction tuning - Fine-tuning (SFT, RLHF, RL with lab feedback, ...) on novel tasks - In-context learning with large-context models - Reasoning through prompt engineering or architectural design - Interpretability and uncertainty quantification - Knowledge retrieval (RAG, knowledge graph, ...) - Efficient interactive system designs (agents, humans, and biological tools) - Training/fine-tuning LLM-powered design and planning engines
52
iclr2025_mlmp
# Workshop on Machine Learning Multiscale Processes Given low-level theory and computationally-expensive simulation code, how can we model complex systems on a useful time scale? The fundamental laws of Nature, the Standard Model of physics, and its most applied part, quantum mechanics, are well established. Theoretically, the dynamics of anything, from a hydrogen atom all the way to Earth's climate, follow those equations. The problem is complexity [Dirac 1929]. An exact computation of a modest system containing 100 atoms is still beyond the capability of modern computers. Some of the greatest scientific achievements resulted from breakthroughs in scale transitions: renormalization, density functional theory, Higgs boson, multiscale models for complex chemical systems, climate modeling, protein folding. Those achievements are highly regarded because they are impactful, but they are also unique and can't be readily applied to different systems. Encouraged by the recent successes, this workshop aims to enable the development of universal AI methods that would be able to find efficient and accurate approximations, and use them for some of the most pressing and high-impact scientific problems that have computational complexity as the limiting factor to an in silico solution, such as: - High-temperature superconductivity - Fusion power - Weather prediction - Living organism digital twins - Catalysts If we solve scale transition, we solve science. We are looking for contributions that will bring us closer to building an AI that can advance from low-level theory and computationally-expensive simulation code to modeling complex systems on a useful time scale. All submissions will be evaluated based on their relevance to this goal. United by its goal, the workshop invites researchers working at all scales of nature: from the Planck length to the size of the Universe, including quantum physics, chemistry, biology, materials science, mesoscopic physics, climate & weather, and astrophysics. We also look forward to cross-pollination of diverse methodologies: dimensionality reduction, manifold learning, Hamiltonian learning, PDE, ODE, symbolic reasoning, RL-based theory exploration, tuning computational models with experimental data, operator learning, physics-informed neural networks, surrogate modelling, digital twins, and more (a minimal surrogate-modelling sketch follows this description). ## Tracks ### New scientific result A normal paper that presents a new scientific result. Such papers are evaluated on a balance of novelty, significance, and technical quality. Page limit is 6 pages. Publication of code and data is encouraged, but not mandatory. Reviewers are allowed to consider open source as a positive contribution to the study significance. ### Dataset or benchmark A work that presents a new dataset or benchmark, a way to measure progress in the field. Upon paper acceptance, the dataset must be open and available to the community; source code must be released under an OSI-approved license. In terms of evaluation, technical quality and significance are the most important criteria. Page limit is 6 pages. ### Findings and open challenges This is the track for significance and novelty. Submissions can have no code and experiments at all, but the authors still carry the burden to convince the reviewers that their ideas are worth exploring. We are looking for submissions introducing and discussing overlooked scientific questions and potential future directions for a given application area. We encourage submissions that address open challenges and describe: 1.
Why current research and the state of the art fall short for a given challenge; 2. What directions the authors believe the community can focus on to help address the open challenge. Page limit is 6 pages. Track idea by AI4AM. ### Engineering Working with complex systems requires good software engineering. In this track we are looking for contributions that introduce advancements in modelling software for complex systems. Contributions can be tools, libraries, frameworks, or infrastructure. The most important criteria are technical quality and significance. The code must be released under an OSI-approved license. ### Negative result A paper that presents a thorough experimental investigation of approaches which, despite considerable effort, did not improve over the current state-of-the-art methods. Submissions should detail the experimental design, document the encountered challenges, and provide a critical analysis of the negative findings along with lessons learned to guide future research. Emphasis is placed on technical rigor, reproducibility, and the broader impact of learning from failure. Page limit is 6 pages. Publication of code and data is encouraged, but not mandatory.
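As a concrete illustration of the surrogate-modelling methodology flagged above, here is a minimal sketch: fit a cheap regressor to a limited budget of expensive simulator calls, then query the regressor instead of the simulator. The toy "simulation", sample sizes, and network size are illustrative assumptions; a real pipeline would wrap an actual low-level code and validate the surrogate's error before trusting it at scale.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_simulation(x):
    """Stand-in for costly low-level code (e.g., an ab initio calculation)."""
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1] ** 2)

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(500, 2))  # a limited budget of expensive runs
y_train = expensive_simulation(X_train)

# Cheap learned surrogate trained on the simulator's input-output pairs.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X_train, y_train)

X_query = rng.uniform(-1, 1, size=(5, 2))
print(surrogate.predict(X_query))            # fast approximate evaluations
print(expensive_simulation(X_query))         # ground truth for comparison
```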
53
iclr2025_nfam
# New Frontiers in Associative Memories ## About This Workshop Associative Memory (AM) is a core notion in psychology responsible for our ability to link people's names to their faces and to remember the smell of a strawberry when we see one. Mathematical formalizations of AM date back to the 1960s-1980s [...]. For instance, the celebrated Hopfield Networks of Associative Memory have made a significant impact on the communities of machine learning researchers, neuroscientists, and physicists. A recent surge of novel theoretical and practical developments [...] has reinvigorated this seemingly established field and placed it in the spotlight of modern ideas in deep learning [...] and contemporary artificial network models of the brain [...] (see also this Quanta Magazine article), culminating in the 2024 Nobel Prize in Physics "for foundational discoveries and inventions that enable machine learning with artificial neural networks". However, there still remain significant gaps between the language, methods, and ideas that are used in the theoretical work pertaining to this topic and mainstream machine learning literature. The main goal of our workshop is to bring together key researchers and developers working on AM from the perspectives of machine learning, computational neuroscience, statistical physics, and software engineering, to build upon the first iteration of this workshop at NeurIPS 2023 towards closing the gaps and converging to a common language, methods, and ideas. We would consider our workshop a success if it sparks enough interest from the communities of AM theorists, LLM practitioners, computational neuroscientists, and software developers, which are largely disjoint, to work together towards understanding the language and methods used by each of the sub-fields. We hope that this convergence will lead to efforts towards the development of novel architectures and algorithms uniquely suitable for Associative Memory networks, and to the integration of these modules into modern large-scale AI systems. Recent developments have opened up a New Frontier for Associative Memory and Hopfield Networks. The announcement of the Nobel Prize in Physics 2024 has further placed this area of research in the spotlight. We believe that 2025 is the right time to bring this topic to ICLR. ## Scope and Related Work Associative memory is defined as a network that can link a set of features into high-dimensional vectors, called memories. Prompted by a large enough subset of features taken from one memory, an animal or an AI network with an associative memory can retrieve the rest of the features belonging to that memory. The diverse human cognitive abilities which involve making appropriate responses to stimulus patterns can often be understood as the operation of an associative memory, with the memories often being distillations and consolidations of multiple experiences rather than merely corresponding to a single event. In the world of artificial neural networks, a canonical mathematical model of this phenomenon is the Hopfield network. Although often narrowly viewed as a model that can store and retrieve predefined verbatim memories of past events, its contemporary variants make it possible to store consolidated memories, turning individual experiences into useful representations of the training data. Such modern variants are often trained using the backpropagation algorithm and often benefit from superior memory storage properties.
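To make the retrieval dynamics just described concrete, here is a minimal sketch of the classical Hopfield construction: Hebbian storage of binary patterns via summed outer products, and retrieval by iterating the sign update from a corrupted cue. The pattern count, size, and corruption level are illustrative assumptions; modern Dense Associative Memories replace this quadratic energy with sharper nonlinearities to store far more patterns.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_patterns = 100, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n))  # memories to store

# Hebbian storage: superposition of outer products, no self-connections.
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)

# Cue: the first memory with 30% of its features corrupted.
cue = patterns[0].copy()
flip = rng.choice(n, size=30, replace=False)
cue[flip] *= -1

# Retrieval: iterate the sign update until the state settles.
state = cue
for _ in range(20):
    h = W @ state
    state = np.where(h >= 0, 1, -1)
print("recovered stored pattern:", np.array_equal(state, patterns[0]))
```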
Contemporary Hopfield networks can be used as submodules in larger AI networks solving a diverse set of tasks. The goal of this workshop is to discuss the existing and emerging developments of these ideas. The research topics of interest at this workshop include (but are not limited to): - Novel architectures for associative memory, Hopfield Networks, Dense Associative Memories, and related models (e.g., Krotov & Hopfield (2016), Demircigil et al. (2017), Ramsauer et al. (2020), Millidge et al. (2022), Krotov (2021), Hoover et al. (2023), Zhang et al. (2024), Krotov (2023), Dohmatob (2023)) - Hybrid memory augmented architectures, e.g., memory augmented Transformers and RNNs, networks with fast weight updates (e.g., Rae et al. (2019), Wu et al. (2022), Wang et al. (2023), He et al. (2023), Wang et al. (2024), Bulatov et al. (2024)) - Energy-based models and their applications (e.g., Hoover et al. (2023a), Hoover et al. (2022), Ota & Taki (2023)) - Associative Memory and Diffusion Models (e.g., Hoover et al. (2023b), Ambrogioni (2024), Pham et al. (2024), Achilli et al. (2024), Ambrogioni (2023), Biroli et al. (2024)) - Training algorithms for energy-based or memory-based architectures (e.g., Du & Mordatch (2019), Scellier & Bengio (2017), Goemaere et al. (2023)) - The connection between associative memory and neuroscience (both insights from neuroscience for better AI, and AI-inspired neurobiological work) (e.g., Krotov & Hopfield (2021), Whittington et al. (2021), Sharma et al. (2022), Tyulmankov et al. (2023), Kozachkov et al. (2023), Kozachkov et al. (2023), Spens & Burgess (2023)) - Kernel methods and associative memories (e.g., Choromanski et al. (2020), Hoover et al. (2024), Hu et al. (2024), Iatropoulos et al. (2022)) - Theoretical properties of associative memories with insights from statistical physics, contraction analysis, control theory, etc. (e.g., Lucibello & Mezard (2024), Fachechi et al. (2018), Agliari et al. (2022)) - Multimodal architectures with associative memories - Lyapunov Functions (e.g., Cohen & Grossberg (1983), Hopfield (1984), Krotov (2021)) - Sequential Hopfield networks for temporal sequences (e.g., Karuvally et al. (2022), Chaudhry et al. (2023), Wu et al. (2023)) - Other machine learning tasks (such as clustering, dimensionality reduction) with associative memories (e.g., Saha et al. (2023), Hu et al. (2024), Hu et al. (2023), Saha et al. (2024), Cabannes et al. (2023), Bhandarkar & McClelland (2023), Davydov et al. (2023)) - Energy-based Transformers (e.g., Hoover et al. (2023a)) - Applications of associative memories and energy-based models to various data domains, such as language, images, sound, graphs, temporal sequences, computational chemistry and biology, etc. (e.g., Widrich et al. (2020), Liang et al. (2022), Fürst et al. (2022), Bricken et al. (2023), Tang & Kopp (2021))
54
iclr2025_question
# Quantify Uncertainty and Hallucination in Foundation Models: The Next Frontier in Reliable AI How can we trust large language models (LLMs) when they generate text with confidence, but sometimes hallucinate or fail to recognize their own limitations? As foundation models like LLMs and multimodal systems become pervasive across high-stakes domains—from healthcare and law to autonomous systems—the need for uncertainty quantification (UQ) is more critical than ever. Uncertainty quantification provides a measure of how much confidence a model has in its predictions, allowing users to assess when to trust the outputs and when human oversight may be needed. This workshop seeks to address this gap by defining, evaluating, and understanding the implications of uncertainty quantification for autoregressive models and large-scale foundation models. Researchers from machine learning, statistics, cognitive science, and human-computer interaction are invited to contribute through submitted papers and structured discussions on key questions and topics: - How can we create scalable and computationally efficient methods for estimating uncertainty in large language models? - What are the theoretical foundations for understanding uncertainty in generative models? - How can we effectively detect and mitigate hallucinations in generative models while preserving their creative capabilities? - How does uncertainty affect multimodal systems? - What are the best practices for communicating model uncertainty to various stakeholders, from technical experts to end users? - What practical and realistic benchmarks and datasets can be established to evaluate uncertainty for foundation models? - How can uncertainty estimates guide decision-making under risk, ensuring safer and more reliable deployment?
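One widely used black-box baseline for the questions above is to estimate uncertainty from disagreement among sampled generations: sample the model several times and compute the entropy of the empirical answer distribution (semantic-entropy-style methods refine this by clustering paraphrases first). The sketch below uses toy stand-in generators; `sample_answer` is an assumption, not any specific LLM API.

```python
import random
from collections import Counter
from math import log

def answer_entropy(sample_answer, prompt, n=20, seed=0):
    """Black-box uncertainty score: entropy of the empirical distribution of
    n sampled answers. `sample_answer` is a hypothetical stand-in for a
    temperature-sampled LLM call, not a specific API."""
    random.seed(seed)
    counts = Counter(sample_answer(prompt) for _ in range(n))
    probs = [c / n for c in counts.values()]
    return -sum(p * log(p) for p in probs)

confident = lambda _prompt: "Paris"  # always returns the same answer
unsure = lambda _prompt: random.choice(["Paris", "Lyon", "Nice"])

print(answer_entropy(confident, "Capital of France?"))  # 0.0 -> low uncertainty
print(answer_entropy(unsure, "Capital of France?"))     # > 0  -> high uncertainty
```

High answer entropy has been found to correlate with hallucination risk, which makes this family of scores a cheap first-line trigger for human oversight.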
55
iclr2025_re_align
# Representational Alignment Both natural and artificial intelligences form representations of the world that they use to reason, make decisions, and communicate. Despite extensive research across machine learning, neuroscience, and cognitive science, it remains unclear what the most appropriate ways are to compare and align the representations of intelligent systems (Sucholutsky et al., 2023). In the second edition of the Workshop on Representational Alignment (Re-Align), we bring together researchers from diverse fields who study representational alignment to make concrete progress on this set of open interdisciplinary problems. We invite researchers across the machine learning, neuroscience, and cognitive science communities to participate in the workshop, and to contribute to the workshop in two ways: First, in the form of contributed papers that address questions of representational alignment that stem from the following central theme: When and why do intelligent systems learn aligned representations, and how can scientists and engineers intervene on this alignment? Other questions topical for this year’s workshop include: - To what extent does representational alignment indicate shared computational strategies among biological and artificial systems? - How have current alignment metrics advanced our understanding of computation, and what measurement approaches should we explore next? - How can we develop more robust and generalizable measures of alignment that work across different domains and types of representations? - How can we systematically increase (or decrease) representational alignment among biological and artificial systems? - What are the implications (positive and negative) of increasing or decreasing representational alignment between systems, on behavioral alignment, value alignment, and beyond? Second, by participating in our workshop hackathon. Since the first iteration of the Re-Align workshop, there have been numerous debates around the metrics that we use to measure representational similarity, which is often taken as a measure of representational alignment (e.g., Cloos et al., 2024; Khosla et al., 2024; Lampinen et al., 2024; Schaeffer et al., 2024). As of now, there is little consensus on which metric best achieves the goal of identifying similarity between systems. The hackathon component of the workshop will be helpful in articulating the consequences of these methodologies by facilitating a common language among researchers, and, as a result, will increase the reproducibility of research in this subdomain.
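To ground the metrics debate above, here is a minimal implementation of one of the most widely used representational similarity measures, linear centered kernel alignment (CKA) as popularized by Kornblith et al. (2019); it is offered as one contested option among many, not the consensus metric. The random matrices stand in for stimulus-by-feature activation matrices from two systems.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices of shape
    (n_stimuli, n_features); 1.0 means identical up to rotation and scale."""
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 64))     # e.g., system 1 responses to 200 stimuli
B = A @ rng.standard_normal((64, 32))  # a linear transform of A
C = rng.standard_normal((200, 32))     # unrelated responses

print(f"aligned:   {linear_cka(A, B):.2f}")  # high: shared structure
print(f"unrelated: {linear_cka(A, C):.2f}")  # near zero
```

Much of the debate cited above concerns precisely which invariances (rotation, scaling, affine maps) a similarity measure should have, so scores from any single metric should be interpreted with care.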
56
iclr2025_sci_fm
# Workshop on Open Science for Foundation Models ## About the Workshop Foundation models (FMs) have transformed AI research but lack scientific transparency. The SCI-FM workshop aims to address this by fostering open science, reproducibility, and the sharing of open-source models and datasets. We invite contributions that explore key aspects of FMs, such as dataset curation, evaluation methodologies, and innovative training strategies. Join us in advancing the accessibility and transparency of foundation models for the global research community. ## Scope We invite papers on topics including, but not limited to: - Open Datasets: Acquisition, curation, and synthesis of pretraining, instruction, and preference datasets through manual or algorithmic methods. Open access to instruction and preference datasets for alignment research. - Open Foundation Models: Pretraining strategies including data scaling, model architecture, multi-modal, and multi-task pretraining. Learning algorithms such as meta-learning, model fusion, model merging, and continual learning designed for open, scalable models. Inference algorithms like decoding, reasoning, search, and planning, tailored for foundation models. - Open Training Protocols: Training dynamics research on scaling laws, interpretability, complexity analysis, emergent capabilities, and phenomena like grokking. Alignment techniques including prompt tuning, prefix tuning, instruction tuning, and reinforcement learning with human/AI feedback. - Open Evaluation: Benchmark development and the creation of transparent evaluation protocols and metrics, including the open sharing of benchmark datasets and evaluation results across different foundation models. - Open Compute Efficiency Techniques: Focus on model distillation, compression, quantization, and optimizing attention or memory mechanisms for improved compute efficiency in open foundation models. - Open Multi-Modal Foundation Models: Expanding to modalities like vision, audio, and multi-modal foundation models, with extra emphasis on underexplored areas such as chemistry, medicine, and education. - Open Interactive and Agent Systems: Open development of conversational AI, interactive learning models, multi-agent systems, and integration with external tools and APIs. - Open Replication of Proprietary Systems: Efforts to replicate and openly share foundation models and systems that were previously proprietary, ensuring transparency and reproducibility for broader research and development.
57
iclr2025_scope
# Workshop on Scalable Optimization for Efficient and Adaptive Foundation Models ## About This Workshop In the rapidly evolving landscape of AI, there is significant demand for scalable optimization methods that yield efficient and adaptive foundation models for inference serving. Specifically, enabling model efficiency while keeping models adaptable to various new downstream tasks poses multifold challenges. Firstly, the model's ability to quickly learn adaptive and efficient sub-model selection on different tasks requires the capability to perform continual weight updates, compute- and memory-efficient fine-tuning, and personalized adaptation. Secondly, with the increased demand for long-context understanding and reasoning, the model needs to combine such efficient adaptation with informative, query-specific token fetching. For instance, imagine a model that continually learns from current news events, adapting to the ever-changing global landscape by integrating up-to-date knowledge. Such models may not only need efficient fine-tuning on new incoming data streams, but also efficient handling of a KV cache that may keep growing with the requirement to handle longer contextual information (a minimal cache sketch follows this description). Additionally, the integration of retrieval-augmented generation (RAG) into foundation models can ensure that generated content is not only relevant, but also reflects the most current knowledge, while causing the prefill size to grow. Thirdly, with such growing demand for contextual adaptation, mixture of experts (MoE) models have also gained significant traction, as they can perform test-time adaptation via a learned routing policy. In addition, the emergence of sub-quadratic models with constant-size KV states, as opposed to the growing KV cache of transformers, has opened up a new avenue for models' adaptation ability in the context of information retention in compressive KV states. These capabilities rely on techniques for adapting foundation models, including fine-tuning, conversion, distillation, and in-context/few-shot learning. This workshop aims to capture advances in scalable, adaptive fine-tuning, calibration, and conversion to yield inference-efficient quadratic and sub-quadratic foundation models, focusing on methodologies across vision, language, and multi-modal domains. Hosting this workshop at ICLR aligns with the conference’s mission to advance the frontiers of machine learning. The workshop aims to bring together interdisciplinary researchers from core ML/DL, efficient ML, computer vision, and NLP. ## Topics: The relevant topics of interest at this workshop include (but are not limited to): - Efficient Long Context Understanding - Sub-Quadratic Models for Foundational Tasks and Personalization - Quadratic to Sub-Quadratic Model Conversion - Task Specific Adaptive Foundation Models - Retrieval Augmented Generation for Efficient Contextual Processing - Efficient Sub-Quadratic Foundation Models - Adaptive Fine-Tuning for Multimodal Foundation Models - Efficient Fine-Tuning for Continual Adaptation and Personalization - Model Optimization for Latency and Throughput Efficient Inference - Adaptive Routing with Mixture of Experts
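To make the growing-cache problem referenced above concrete, here is a minimal single-head sketch of autoregressive attention decoding with an explicit KV cache; the dimensions and random inputs are illustrative assumptions. The cache stores one key and one value per decoded token, so memory grows linearly with context length, which is exactly the cost that compressive or constant-size KV states in sub-quadratic models aim to avoid.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class KVCache:
    """Single-head KV cache. Memory grows linearly with decoded length,
    which is exactly the cost that compressive or constant-size KV states
    in sub-quadratic models aim to avoid."""
    def __init__(self, d):
        self.keys, self.values, self.d = [], [], d

    def step(self, q, k, v):
        # Append this token's key/value, then attend over the whole history.
        self.keys.append(k)
        self.values.append(v)
        K = np.stack(self.keys)                  # (t, d) -- grows every step
        V = np.stack(self.values)
        attn = softmax(K @ q / np.sqrt(self.d))  # scores over all cached tokens
        return attn @ V                          # attention output for this token

rng = np.random.default_rng(0)
d = 16
cache = KVCache(d)
for _ in range(8):  # decode 8 tokens
    q, k, v = rng.standard_normal((3, d))
    out = cache.step(q, k, v)
print(len(cache.keys), out.shape)  # cache length == number of tokens decoded
```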
58
iclr2025_scsl
## Workshop on Spurious Correlation and Shortcut Learning: Foundations and Solutions Reliance on spurious correlations due to simplicity bias is a well-known pitfall of deep learning models. This issue stems from the statistical nature of deep learning algorithms and their inductive biases at all stages, including data preprocessing, architectures, and optimization. Therefore, spurious correlations and shortcut learning are fundamental and common practical problems across all branches of AI. The foundational nature and widespread occurrence of reliance on spurious correlations and shortcut learning make it an important research topic and a gateway to understanding how deep models learn patterns and the underlying mechanisms responsible for their effectiveness and generalization. This workshop aims to address two aspects of this phenomenon: its foundations and potential solutions. ## Overview Despite the remarkable advancements towards generalizability and autonomy in AI systems, persistent challenges such as spurious correlations and shortcut learning continue to hinder the robustness, reliability, and ethical deployment of machine learning systems. These challenges arise from the statistical nature of machine learning algorithms and their implicit or inductive biases at all stages, including data preprocessing, architectures, and optimization. As a result, models rely on spurious patterns rather than understanding underlying causal relationships, making them vulnerable to failure in real-world scenarios where data distributions involve under-represented groups or minority populations. This workshop aims to foster a collaborative community to address these critical issues by bringing together experts from diverse fields and pushing the boundaries of current research. We will focus on promoting three key avenues: (i) the development of comprehensive evaluation benchmarks and the exploration of under-examined facets of the problem, (ii) the creation of novel solutions for building robust models that effectively tackle spurious correlations in real-world applications, and (iii) shedding light on lesser-explored aspects to deepen our understanding of the nature of these phenomena. ## Objectives Current benchmarks based on group labels (typically scored by worst-group accuracy; a minimal sketch follows this description) offer limited guarantees of robustness, addressing only a few known spurious correlations. Additionally, human annotation of groups is not a scalable solution and may overlook spurious correlations that do not align with human perceptions. Current evaluations do not inform us about scenarios in which the spurious correlation is unknown or annotations are missing. Thus, there is a notable lack of rigorous evaluation benchmarks for assessing robustness to spurious correlations. Developing comprehensive benchmarks, as well as automated methods for detecting spurious correlations, could significantly advance progress in this field. Moreover, many facets of developing robust models to combat spurious correlations remain inadequately explored. The investigation of spurious correlations in learning paradigms beyond supervised learning has been particularly limited.
As foundation models continue to gain prominence, it becomes necessary to leverage these models not only as tools for tackling spurious correlation challenges but also as subjects of study, to better understand the spurious correlations they may themselves manifest. While the impacts of, and solutions for, reliance on spurious correlations and shortcut learning have been targeted more frequently, attention has recently shifted to their foundations. Recent works focus on the origins of reliance on spurious correlations and shortcut learning in DNNs. Factors such as the tendency to maximize margins, biases introduced during training with SGD, and the difference in the time at which core versus spurious patterns are learned are examples of progress toward a fundamental understanding of this phenomenon in deep learning. However, many questions about the mechanisms behind learning biases across AI paradigms, architectures, and algorithms remain open.

## Topics

Overall, the topics of interest for the workshop include, but are not limited to, the following:

- Introducing new spurious correlation benchmarks for various fields and modalities, including multimodal data (image, text, audio, video, graph, time series, etc.)
- Examining foundational large language models (LLMs) and large multimodal models (LMMs) in terms of robustness to spurious correlations
- Creating new datasets to evaluate the robustness of multi-modal models
- Developing new benchmarks focusing on different types of features (depending on their modality) as shortcuts
- Constructing new robustness benchmarks for various applications (medical, social, industrial, geographical, etc.)
- Designing new tasks and environments to study spurious correlations in reinforcement learning
- Presenting new real-world scenarios and benchmarks that challenge reliance on spurious correlations and shortcut learning
- Proposing new robustification methods
- Finding solutions for the efficient robustification of LLMs and LMMs
- Introducing new robustification methods for various paradigms, such as reinforcement learning, contrastive learning, and self-supervised learning
- Proposing new algorithms for causal representation learning
- Investigating novel solutions for robustness to spurious correlations in less-explored areas, such as optimization algorithms and data gathering and preprocessing schemes
- Finding solutions for robustness to spurious correlation when information regarding the spurious feature is completely or partially unknown
- Introducing methods for robustness to spurious correlations in specific applications (medical, social, industrial, geographical, etc.)
- Exploring the foundations of spurious correlations and shortcut learning
- Presenting mathematical formulations that describe the issue and its origins
- Studying the role of widely used gradient-descent-based optimization methods in reliance on shortcuts and improvement solutions
- Exploring the effect of shortcuts and spurious features on the loss landscape
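To ground the phenomenon discussed above, here is a minimal synthetic sketch (the feature construction, correlation strengths, and the scikit-learn classifier are all illustrative assumptions): a spurious feature that agrees with the label 95% of the time in training dominates a weak core feature, and accuracy collapses when the correlation flips at test time.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, p_spurious):
    """Binary labels y; a weak core feature and a spurious feature
    that agrees with y with probability p_spurious."""
    y = rng.integers(0, 2, n)
    core = y + rng.normal(0, 2.0, n)            # weakly predictive
    agree = rng.random(n) < p_spurious
    spurious = np.where(agree, y, 1 - y).astype(float)
    return np.column_stack([core, spurious]), y

X_tr, y_tr = make_data(5000, p_spurious=0.95)   # shortcut works in train
X_te, y_te = make_data(5000, p_spurious=0.05)   # shortcut flips at test

clf = LogisticRegression().fit(X_tr, y_tr)
print("train acc:", clf.score(X_tr, y_tr))      # high: shortcut exploited
print("test acc :", clf.score(X_te, y_te))      # collapses under shift
print("weights  :", clf.coef_)                  # spurious weight dominates
```

The same mechanism, with images or text in place of the two scalar features, underlies most of the benchmarks and robustification methods solicited above.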
59
iclr2025_sllm
## Deep Dive into Mixture of Experts, Quantization, Hardware, and Inference

Large Language Models (LLMs) have emerged as transformative tools in both research and industry, excelling across a wide array of tasks. However, their growing computational demands, especially during inference, raise significant concerns about accessibility, environmental sustainability, and deployment feasibility. At the same time, sparsity-based techniques are proving critical not just for improving efficiency but also for enhancing interpretability, modularity, and adaptability in AI systems.

This workshop aims to bring together researchers and practitioners from academia and industry who are advancing the frontiers of sparsity in deep learning. Our scope spans several interrelated topics, including Mixture of Experts (MoEs), LLM inference and serving, network pruning, sparse training, distillation, activation sparsity, low-rank adapters, hardware innovations, and quantization. A key objective is to foster connections and unlock synergies between traditionally independent yet highly related research areas, such as activation sparsity and sparse autoencoders (SAEs), or quantization and KV cache compression.

Rather than focusing solely on efficiency, we aim to explore how sparsity can serve as a unifying framework across multiple dimensions of AI—driving advances in interpretability, generalization, and system design. By facilitating the fusion of ideas from different topics, the workshop will create new opportunities for innovation. We encourage participants to think beyond traditional constraints, exploring how different forms of sparsity can inform each other and yield new algorithms. Whether the goal is faster inference, modular architectures, or more interpretable models, our aim is to catalyze research that deepens the integration of sparsity within AI.

Topics of interest include, but are not limited to:

- Mixture of Experts (MoEs) and Modularity
- Parameter Sparsity/Pruning
- Interaction with Quantization and Distillation
- Activation Sparsity for Inference
- Sparsity for Interpretability
- Hardware Innovation for Sparsity
- Parameter-Efficient Fine-Tuning
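As a concrete anchor for the MoE and activation-sparsity themes above, below is a toy top-k routing sketch (the dimensions, linear experts, and router are illustrative assumptions, not a real architecture): only k of the experts execute per token, which is the source of both the compute savings and the modularity discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, k = 16, 8, 2

# Toy experts: each is a linear map; a router scores experts per token.
experts = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(n_experts)]
router_w = rng.normal(size=(d, n_experts)) / np.sqrt(d)

def moe_forward(x):
    """Sparse top-k routing: only k expert networks run per token."""
    logits = x @ router_w                        # (n_experts,)
    top = np.argsort(logits)[-k:]                # indices of top-k experts
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()                         # renormalized softmax gates
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

x = rng.normal(size=d)
out = moe_forward(x)
print("output head:", out[:4], "| only", k, "of", n_experts, "experts ran")
```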
60
iclr2025_ssi_fm
# Scaling Self-Improving Foundation Models without Human Supervision

## Overview

The availability of internet data, while vast, is ultimately finite, or at least growing at a pace that lags behind the consumption needs of foundation models (FMs) during pre-training. Perhaps as is most evident with large language models (LLMs), even today the projected gains from scaling up pre-training on internet data are smaller than those from incorporating specific test-time techniques. It is projected that we will soon run out of high-quality data worth training on directly via next-token prediction. Similarly, real robot data in embodied or physical intelligence problems remains quite limited to date. All this is to say that as FMs scale in size and capability, we will soon hit a "data" bottleneck blocking progress. To address this, machine learning techniques that enable models to self-improve, i.e., continually improve beyond their initial training data, become essential. In theory, this can be done by training on self-generated or synthetic data that the same (or other) models produce.

**The unique challenges of self-improvement as a learning paradigm.** The paradigm of training on self-generated synthetic data, or what we refer to as self-improvement, is distinct from standard supervised and reinforcement learning (RL) in several critical ways, as we discuss next. These differences underscore the need for a dedicated study of these topics. In supervised learning, models are trained on high-quality annotations from humans. Moreover, for pre-training of LLMs, high-quality data is often curated in heuristic ways that are largely independent of the learning algorithm. In contrast, self-improvement frameworks rely on the model's ability to generate its own training data (or use other models to generate this data), and thus the algorithm for data curation must now be subsumed by the learning framework. RL also involves training on a model's generations, and as a result might appear similar to the self-improvement paradigm. However, due to its generality, a generic RL algorithm (designed to cater to all downstream RL problems) might not be tailored enough for self-improvement, which poses specific constraints and conditions on improving models. For instance, in contrast to an unpredictable external environment, in many use cases the only randomness in the data-generation process for self-improving foundation models is the inherent randomness in the model's own outputs. Furthermore, RL algorithms are typically meant to optimize rewards obtained from an accurate reward oracle, which is absent in the self-improvement paradigm. Here, we can only rely on querying learned verifiers or reward models, which can fail arbitrarily. In fact, unless carefully designed, self-improvement recipes can lead to model collapse with more training, a failure mode absent in traditional RL due to the presence of a meaningful reward signal. Thus, unlike RL, self-improvement algorithms cannot naively exploit the verification-generation gap. This necessitates research on self-improvement algorithms that also adapt to errors made by the learned evaluation model (a toy illustration of this error leakage appears after the topic list below). We believe that such distinctions and specificity should yield far more tailored, and hence more effective, algorithms than a generic RL approach.
Connections to safety and alignment: In addition, we would like to clarify that this workshop is also interested in understanding self-improvement principles for advancing safety and alignment (e.g., weak-to-strong generalization, multi-agent debate, etc.), as well as the implications of existing self-improvement techniques for the safety and alignment of these models (e.g., how can we understand behavior evolving through self-improvement training, theoretical guarantees on the reliability of self-improvement training, alleviating value misalignment during self-improvement training, etc.). We realize that powerful AI models will have societal and economic implications, and we are committed to encouraging the responsible use of self-improvement methods. Part of the workshop will serve as a venue to discuss the implications of using these self-improvement methods to train models. We are also interested in understanding how self-improvement methods should be built responsibly, what testing criteria to use to understand the behavior of these methods, and how to integrate safety and alignment as primary objectives when developing self-improvement methods.

## Ethics Statement

We are committed to fostering responsible research and discussions around self-improvement that prioritize safety, transparency, and societal well-being. We expect most research discussions around the machine learning principles behind self-improvement methods to enhance our understanding of self-improvement as a community, which should hopefully open more avenues to tackle the long-term catastrophic risks posed by these methods, through an improved understanding of how they operate, where they break, and where misalignment is likely to happen. We believe these discussions should not pose any immediate risks and will help the community open the black box of self-improvement. We think safety is also a core capability that the self-improvement community must study, and we will encourage workshop participants to discuss safety and ethical risks openly and to propose mitigation strategies to guide the responsible development of self-improving foundation models. This workshop will provide a place for both capabilities and safety researchers to join an open discussion.

## Goal of the workshop

This workshop focuses on developing machine learning principles and algorithms for enabling self-improvement in foundation models. We aim to bring together communities working on foundation models, reinforcement learning and online learning, and cognitive neuroscience, along with practitioners from various domains, to foster discussions and collaborations on several fundamental topics around this general theme of self-improvement, including but not limited to:

- Learning objectives and algorithms: what should we learn? How should we supervise training?
- Multi-agent and multi-model systems for enabling self-improvement
- Training on machine-generated synthetic data without collapse
- Autonomous online learning and reinforcement learning algorithms for FMs
- Efficiently exploiting tools and external information for self-improvement
- Theoretically characterizing conditions under which self-improvement is feasible (e.g., the verification-generation gap, the nature of problems where self-improvement is possible)
- Using weak supervision for improving strong models
- Gains from training with self-improvement algorithms at inference time (e.g., computational benefits, performance benefits, etc.)
- Limits of self-improvement training (e.g., when is expert data needed?)
- Self-improvement for alignment and safety (synthetic data, test-time compute, weak-to-strong generalization)
- Applications: software agents, robotic self-improvement, multi-modal systems, math, etc.

We are especially interested in downstream applications of self-improvement algorithms, and we explicitly encourage submissions that study applications of these algorithms in downstream problem domains. The composition of our speaker and organizer set covers the different application areas of interest.
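The interaction between an imperfect learned verifier and self-generated training data can be made concrete with a toy sketch (the "generator" and "verifier" below are artificial stand-ins with assumed error rates, not real models): a single round of rejection-sampling-style filtering still admits a residual fraction of wrong examples, which is exactly the error source that self-improvement algorithms must adapt to.

```python
import random
random.seed(0)

# Toy stand-ins (assumptions, not real models): a "generator" that answers
# a + b correctly with some probability, and a learned "verifier" that
# mislabels with some false-positive / false-negative rate.
def generate(a, b, p_correct=0.6):
    return a + b if random.random() < p_correct else a + b + random.choice([-1, 1])

def learned_verifier(a, b, ans, fp_rate=0.1, fn_rate=0.1):
    correct = (ans == a + b)
    flip = random.random() < (fn_rate if correct else fp_rate)
    return correct ^ flip

# One round of rejection-sampling self-improvement: keep only
# verifier-approved generations as new "training data".
dataset, n_bad = [], 0
for _ in range(10_000):
    a, b = random.randrange(100), random.randrange(100)
    ans = generate(a, b)
    if learned_verifier(a, b, ans):
        dataset.append((a, b, ans))
        n_bad += (ans != a + b)

print(f"kept {len(dataset)} examples; {100 * n_bad / len(dataset):.1f}% are "
      "wrong: verifier errors leak into the next training round")
```

With the assumed rates, roughly 7% of the kept examples are wrong; iterating naively compounds this, which is one simple way to see why naive exploitation of the verification-generation gap can drive collapse.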
61
iclr2025_synthdata
# Will Synthetic Data Finally Solve the Data Access Problem?

Access to large-scale, high-quality data has been shown to be one of the most important factors in the performance of machine learning models. Recent works show that large (language) models can greatly benefit from training with massive data from diverse (domain-specific) sources and from aligning with user intention. However, the use of certain data sources can trigger privacy, fairness, copyright, and safety concerns. The impressive performance of generative artificial intelligence has popularized the usage of synthetic data, and many recent works suggest (guided) synthesization can be useful for both general-purpose and domain-specific applications.

Will synthetic data ultimately solve the data access problem for machine learning? This workshop seeks to address this question by highlighting the limitations and opportunities of synthetic data. It aims to bring together researchers working on algorithms and applications of synthetic data, general data access for machine learning, privacy-preserving methods such as federated learning and differential privacy, and large model training experts to discuss lessons learned and chart important future directions.

Topics of interest include, but are not limited to, the following:

- Risks and limitations of synthetic data.
- New algorithms for synthetic data generation.
- New applications of using synthetic data (e.g., in healthcare, finance, gaming and simulation, education, scientific research, or autonomous systems).
- Synthetic data for model training and evaluation.
- Synthetic data for improving specific model capabilities (e.g., reasoning, math, coding).
- Synthetic data to address privacy, fairness, safety and other data concerns.
- Evaluation of synthetic data quality and models trained on synthetic data.
- Conditional and unconditional synthetic data generation.
- Fine-grained control of synthetic data generation.
- Data access with federated learning and privacy-preserving methods.
- New paradigms of accessing data for machine learning.
- Mixing synthetic and natural data.
62
iclr2025_verifai
# VerifAI: AI Verification in the Wild

## Overview

This workshop explores the intersection of scale-driven generative artificial intelligence (AI) and the correctness-focused principles of verification. Formal analysis tools such as theorem provers, satisfiability solvers, and execution monitoring have demonstrated success in ensuring properties of interest across a range of tasks in software development and mathematics where precise reasoning is necessary. However, these methods face scaling challenges. Recently, generative AI such as large language models (LLMs) has been explored as a scalable and adaptable option for creating solutions in these settings. The effectiveness of AI in these settings increases with more compute and data, but unlike traditional formalisms, these models are built around probabilistic methods – not correctness by construction.

In the VerifAI: AI Verification in the Wild workshop we invite papers and discussions on how to bridge the fields of formal analysis and artificial intelligence. Potential angles include, but are not limited to, the following:

- Generative AI for formal methods: Formal methods offer strong guarantees of desired or undesired properties, but they can be challenging to implement. When faced with proof searches that fail to terminate or with extensive search spaces, machine learning approaches can help guide those search processes effectively, and LLMs may even write the theorems themselves. How can we further integrate AI to enhance verification practices? How can we ensure that AI-generated test conditions align with actual desired properties?
- Formal methods for generative AI: Generative AI can benefit from formal methods, which provide assurance and thus build trust. For example, satisfiability solvers can serve as a rigorous backend in reasoning domains, code generated by the model can be annotated with specifications for program analysis tools to ensure its correctness, and even simple symbolic methods such as automata simulators can steer AI generations towards more logically consistent behavior. How else can we integrate formal methods into generative AI development and usage?
- AI as verifiers: Hard guarantees can be notoriously rigid and difficult to achieve. In these cases, probabilistic methods are appealing alternatives that provide “soft assurances”. How can we develop more robust and trustworthy verifiers from probabilistic methods? In what settings is it appropriate to make verification more flexible using probabilistic methods?
- Datasets and benchmarks: The advancement of research at the intersection of generative AI and formal methods relies heavily on the availability of robust datasets and benchmarks. We welcome papers that present new datasets and benchmarks in reasoning, theorem proving, code generation, and related areas. How can we design benchmarks that accurately reflect the challenges in combining probabilistic models with formal (or informal) verification?
- Special Theme: LLMs for Code Generation: The use of LLMs for code generation has grown significantly, with a growing body of research advocating for the integration of formal structures and tools, such as context-free grammars, static analyzers, and SMT-guided repair. These methods aim to improve both the safety and effectiveness of code generated by LLMs, particularly for low-resource programming languages. In the context of code generation, LLM agents can leverage tool use and learning from execution feedback to validate their generations.
This year, our special theme invites researchers to explore how techniques from the programming languages and formal methods communities can further enhance LLM-driven code generation. We welcome novel methodologies, analytic contributions, works in progress, negative results, and position papers that will foster discussion.
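As a small instance of the "formal methods for generative AI" direction above, the sketch below checks a model-proposed solution with the Z3 SMT solver (requires the `z3-solver` package; the constraint system and the hardcoded "LLM" candidate are illustrative assumptions standing in for a real generation pipeline):

```python
from z3 import Int, Solver, sat  # pip install z3-solver

# Constraint problem: find x, y with x + 2*y == 14 and x > y > 0.
# In a real pipeline an LLM would propose the candidate; here it is a
# hardcoded stand-in.
llm_candidate = {"x": 6, "y": 4}

x, y = Int("x"), Int("y")
s = Solver()
s.add(x + 2 * y == 14, x > y, y > 0)
# Pin the variables to the model-proposed values and ask the solver
# whether all constraints hold simultaneously.
s.add(x == llm_candidate["x"], y == llm_candidate["y"])

if s.check() == sat:
    print("candidate verified:", llm_candidate)
else:
    print("candidate rejected: some constraint is violated")
```

The same pattern generalizes: the generative model proposes, and a solver (or static analyzer, or execution monitor) disposes, giving a hard check on an otherwise probabilistic pipeline.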
63
iclr2025_wmark
# Workshop on GenAI Watermarking A Dedicated Space for Watermarking in Generative AI Watermarking is crucial in the age of generative AI, yet often gets lost in broader conversations around AI safety and security. Our workshop brings together experts from academia, industry, policy, and different communities to discuss advancements and challenges in watermarking technologies. ## Topics - Algorithmic advances and new applications of watermarking - Adversarial robustness and security-related topics - Evaluation and benchmarks - Industry requirements - Policy, regulations and ethics landscapes
64
iclr2025_world_models
# Workshop on World Models: Understanding, Modelling and Scaling

## Workshop Scope

The concept of the "World Model" focuses on how intelligent agents can understand and model the external interactive worlds/environments to improve their decision-making and planning abilities. World models were initially focused on modelling low-level physical quantities and interactions by recurrent neural networks (RNNs). Over time, the "World Models" concept has expanded to real-world simulation (e.g. Sora and Genie) and the generation of complex, realistic, and high-dimensional environments.

This workshop explores classical World Modelling backbones for understanding and modelling the world, such as Transformers, RNNs, state-space models (SSMs), spatial-temporal modelling and causality analysis. Building from these foundational topics, the workshop will also discuss the broader and evolving concept of "World Models" for complex real-world prediction and simulation, like video/text generation, and more specific applications like embodied AI, healthcare, and the sciences. This evolution highlights the growing complexity and capabilities of World Models. By bringing together leading researchers, the workshop will cover both classical and cutting-edge techniques, and discuss how World Models can be applied across a wide range of emerging applications. Some of the fundamental questions and specific challenges that this workshop aims to address are:

- Understanding the World and Extracting Knowledge.
- World Model Training and Evaluation.
- Scaling World Models Predictions Across Language, Vision, and Control.
- World Models in General Domains: Embodied AI, Healthcare, Natural and Social Sciences, and Beyond.

The workshop covers the widest range of World Models topics, including understanding, modelling, as well as scaling with cutting-edge generative AI. We welcome submissions related to the construction, analysis and applications of world models, such as Model-Based Reinforcement Learning, Causality, Sequential Modelling, Simulation of the Environment, Diffusion Models, Video Generation, Foundation World Models, 2D to 3D, Robotics, Embodied AI, etc. We also encourage submissions from the Natural Sciences (e.g., physics, chemistry, biology) and Social Sciences (e.g., pedagogy, virtual sociology simulation) related to world/environment construction in the science domain to offer attendees a more comprehensive perspective. In summary, topics of interest mainly include, but are not limited to:

- Understanding World Rules: Exploring how World Models capture environment dynamics, causal understanding, spatial-temporal patterns, model-based RL, and theoretical foundations for simulation and prediction.
- World model training and evaluation: strengths, limitations, and challenges of current modelling architectures (e.g. Transformers, RNNs, and SSMs), training algorithms (autoregressive training, diffusion modelling, RL, and normalizing flows) and dataset construction.
- Scaling World Models prediction and generation across language, vision, and control: Investigating how integrating visual, auditory, and textual data improves the realism of World Models.
- World Models in general domains: Exploring World Models in robotics, embodied AI, healthcare, natural and social sciences, and beyond to improve prediction and decision-making.
- Benchmark, Dataset, and Demonstration about World Models such as environment simulation.
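In its most stripped-down form, a world model is a learned transition function that can be rolled out without touching the environment. The sketch below (linear dynamics and least-squares fitting are simplifying assumptions, far removed from Sora-scale simulators) fits s' ≈ A s + B a from random interactions and then "imagines" the outcome of an action plan:

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [-0.1, 0.9]])   # unknown environment dynamics
B_true = np.array([[0.0], [1.0]])

# Collect interaction data (s, a, s') using random actions.
S, Acts, S_next = [], [], []
s = np.zeros(2)
for _ in range(500):
    a = rng.normal(size=1)
    s_next = A_true @ s + B_true @ a + rng.normal(0, 0.01, size=2)
    S.append(s); Acts.append(a); S_next.append(s_next)
    s = s_next

# Fit the world model s' ~ [A B] [s; a] by least squares.
X = np.hstack([np.array(S), np.array(Acts)])
W, *_ = np.linalg.lstsq(X, np.array(S_next), rcond=None)
A_hat, B_hat = W[:2].T, W[2:].T

# Roll the learned model forward to predict the effect of an action plan
# without querying the environment ("imagination"/planning).
s_pred = np.zeros(2)
for a in [1.0, 1.0, -1.0]:
    s_pred = A_hat @ s_pred + B_hat @ np.array([a])
print("imagined state after plan:", s_pred)
```

Modern world models replace the linear map with RNNs, Transformers, SSMs, or diffusion models, but the train-then-roll-out loop is the same.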
65
iclr2025_wrl
# Robot Learning Workshop: Towards Robots with Human-Level Abilities

## About This Workshop

The year 2024 has seen an explosion of interest in humanoid robots. However, recent systems for drone racing, playing table tennis, and others clearly demonstrate that the humanoid form factor isn't a requirement for human-level performance. In the 7th Robot Learning workshop, to be held at ICLR 2025, we will look beyond the humanoid embodiment and ask: how far are we from robots with human-level abilities? What do we need to improve about embodied learning, decision-making, perception, and data collection to train generally physically capable robots to robustly perform a wide range of activities such as cooking or tidying up a house – activities that people do without much thinking?

We believe many of the weaknesses of current robotic systems are a reflection of the shortcomings of general AI methods and models. As such, we seek diverse perspectives on the workshop theme from robotics-focused and robotics-orthogonal parts of the ICLR community alike, scientific contributions from academia and industry, as well as participants from a variety of backgrounds and career stages. We welcome submissions of original research papers as well as systems papers focusing on algorithmic innovations, theoretical advancements, system design, or practical applications relevant to the workshop theme. Specific areas of interest include but are not limited to:

- Novel ML algorithms and model architectures for robot control: techniques integrating large multi-modal models, sim-to-real bridging, safe policy optimization, and data efficiency.
- Human-robot interaction and collaboration: socially aware motion planning, adaptive interfaces, and trust-building strategies for seamless teamwork.
- Hardware innovations and system integration: advanced sensing and actuation, high-degree-of-freedom controllers, energy-efficient designs, and cohesive robotics architectures.
- Simulation, benchmarking, and evaluation methodologies: realistic simulation environments, standardized task suites, robust metrics, and cross-domain validation protocols.
- Applications in unstructured and dynamic environments: household assistance, mobile manipulation, industrial automation, healthcare, disaster response, and other real-world domains.
66
iclr2025_wsl
# Workshop on Neural Network Weights as a New Data Modality

## Overview

The recent surge in the number of publicly available neural network models—exceeding a million on platforms like Hugging Face—calls for a shift in how we perceive neural network weights. This workshop aims to establish neural network weights as a new data modality, offering immense potential across various fields. We plan to address key dimensions of weight space learning:

- Weight Space as a Modality
  - Characterization of weight space properties such as symmetries (e.g., permutations, scaling, and beyond).
  - Weight space augmentations, scaling laws, model zoo datasets, etc.
- Weight Space Learning Tasks/Learning Paradigms
  - Supervised approaches: Weight embeddings, meta-learning networks, (graph) hyper-networks.
  - Unsupervised approaches: Autoencoders or hyper-representations.
  - Weight space learning backbones: Plain MLPs, transformers, equivariant architectures (e.g., GNNs and neural functionals).
- Theoretical Foundations
  - Expressivity of weight space processing modules.
  - Theoretical analysis of model weight properties.
  - Generalization bounds of weight space learning methods.
- Model/Weight Analysis
  - Inferring model properties and behaviors from their weights.
  - Investigating neural lineage and model trees through weights.
  - Learning dynamics in population-based training.
  - Interpretability of models via their weights.
- Model/Weight Synthesis and Generation
  - Modeling weight distributions to facilitate weight sampling.
  - Generating weights in the context of transfer learning, learnable optimizers, and implicit neural representation (INR) synthesis.
  - Model operations/editing (e.g., model merging, model soups, model pruning, task arithmetic).
  - Meta-learning and continual learning using model weights.
- Applications of Weight Space Learning
  - Computer vision tasks: Using NeRFs/INRs.
  - Applications to physics and dynamical system modeling.
  - Backdoor detection and adversarial robustness in weight space.

Weight space learning remains a nascent and scattered research area. Our goal is to provide a bridge between the topics above and research areas such as model merging, neural architecture search, and meta-learning. By aligning terminology and methodologies, we aim to drive sustained progress and foster interdisciplinary collaboration.

## Research Goals and Key Questions

This workshop will explore fundamental questions about weight spaces, such as:

- What properties of weights, such as symmetries and invariances, present challenges or can be leveraged for optimization, learning, and generalization?
- How can model weights be efficiently represented, manipulated, and used for downstream tasks?
- What model information can be decoded from model weights?
- Can model weights be generated for specific applications, to make training and model selection more efficient?
- Can weight space learning benefit research in processing and synthesising neural fields, e.g., for scientific applications and 3D vision?
- How can we democratize the usage of weight spaces, enabling more efficient research progress?
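The permutation symmetry mentioned above can be verified in a few lines (a minimal sketch; the layer sizes and tanh nonlinearity are arbitrary choices): permuting hidden units, i.e., the rows of (W1, b1) together with the columns of W2, leaves the network's function unchanged, so weight-space learners must treat such weight configurations as equivalent.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, d_out = 4, 8, 3

W1, b1 = rng.normal(size=(d_h, d_in)), rng.normal(size=d_h)
W2, b2 = rng.normal(size=(d_out, d_h)), rng.normal(size=d_out)

def mlp(x, W1, b1, W2, b2):
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

# Permute the hidden units: rows of (W1, b1) and columns of W2 in lockstep.
perm = rng.permutation(d_h)
W1p, b1p, W2p = W1[perm], b1[perm], W2[:, perm]

x = rng.normal(size=d_in)
print(np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1p, b1p, W2p, b2)))  # True
```

With d_h hidden units there are d_h! functionally identical weight vectors per layer, which is one reason equivariant backbones and canonicalization are central topics above.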
67
iclr2025_xai4science
# XAI4Science: From Understanding Model Behavior to Discovering New Scientific Knowledge

## About This Workshop

Machine learning (ML) models are impressive when they work, but they can also show unreliable, untrustworthy, and even harmful behavior. Yet, such models are widely adopted and deployed, even though we do not understand why they work so well and fail miserably at times. Such rapid dissemination encourages irresponsible use, for example, to spread misinformation or create deep fakes, while hindering efforts to use them to solve pressing societal problems and advance human knowledge. Ideally, we want models to help us improve our understanding of the world and, at the very least, we want them to aid human knowledge and help us to further enrich it. Our goal in this workshop is to take a step in this direction by bringing together researchers working on understanding model behavior and using it to discover new human knowledge. The workshop will include theoretical topics on understanding model behavior, namely interpretability and explainability (XAI), but also three distinct scientific application areas: weather and climate, healthcare, and material science (ML4Science).

## Topics

- A-priori (i.e., ante-hoc) interpretability and self-explainable models for understanding a model's behaviour
- A-posteriori (i.e., post-hoc) interpretability and attribution methods for understanding a model's behaviour, including methods for evaluating the accuracy of post-hoc interpretability and attribution
- Practical use of interpretability and explainability for knowledge discovery in
  - Weather and climate science,
  - Material science, and
  - Healthcare
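As a minimal instance of the post-hoc attribution methods listed above, here is a gradient-times-input sketch on a hand-coded logistic model, chosen so the gradient is analytic (the weights and inputs are illustrative; practical XAI pipelines use richer estimators such as integrated gradients or SHAP):

```python
import numpy as np

# A toy differentiable model: logistic regression with known weights.
w = np.array([2.0, -1.0, 0.0])          # the third feature is irrelevant
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

def grad_times_input(x):
    """Gradient-times-input attribution: (d p / d x_i) * x_i."""
    p = predict(x)
    grad = p * (1 - p) * w               # chain rule through the sigmoid
    return grad * x

x = np.array([1.5, 2.0, 3.0])
print("prediction :", predict(x))
print("attribution:", grad_times_input(x))  # ~0 for the irrelevant feature
```

In scientific applications, attributions like these are the starting point for hypothesis generation: a feature that consistently receives high attribution across samples becomes a candidate for domain-level investigation.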
68
icml2023_aihci
# Artificial Intelligence and Human-Computer Interaction

## Workshop Overview

Artificial intelligence (AI) and Human-Computer Interaction (HCI) share common roots: early work on conversational agents laid the foundation for both fields. Despite this shared history, economic and political influences drove the fields apart in subsequent decades. The recent rise of data-centric methods in machine learning has propelled few-shot emergent AI capabilities, resulting in a raft of practical tools. In particular, modern AI techniques now power new ways for machines and humans to interact. Recently, a wave of HCI tasks has been proposed to the machine learning community; these tasks direct AI research by contributing new datasets and benchmarks, and by challenging existing modeling techniques, learning methodologies, and evaluation protocols. Machine learning techniques have been developed for a variety of tasks, including user-interface understanding, UI generation, accessibility, and reinforcement learning from human feedback.

This workshop offers a forum for researchers to discuss these new research directions, identify important challenges, showcase new computational and scientific ideas that can be applied, share datasets and tools that are already available, or propose those that should be further developed.

## Topics

We invite researchers in both academia and industry who work in areas including but not limited to:

- User interface modeling for understanding and generation
- Reinforcement learning with human feedback (RLHF)
- Explainable and interpretable machine learning methods
- Generative AI and creativity tools
- Human evaluation methods
- Personalizable and correctable machine learning models
- Novel human interactions with models
- Active learning and human-in-the-loop systems
- Ethics and fairness-based models, interactions, and evaluations
- Tools and datasets to accelerate work at the intersection of HCI and AI
- Challenges in working at the intersection of AI and HCI
69
icml2023_deploygenai
## Challenges of Deploying Generative AI

Generative modeling has recently gained massive attention given high-profile successes in natural language processing and computer vision. However, major challenges remain in deploying generative models for real-world impact in domains like healthcare and biology. This is a challenging agenda that requires collaboration across multiple research fields and industry stakeholders. This workshop aims to advance such interdisciplinary conversations around the challenges of deploying generative models – the lessons learned by deploying large language models could be impactful for other high-stakes domains. Specifically, we will solicit contributions that prioritize (1) Multimodal capabilities in generative modeling, (2) Deployment-critical features in generative models such as Safety, Interpretability, Robustness, Ethics, Fairness and Privacy, and (3) Human-facing evaluation of generative models.

## Topics

We seek papers on all topics related to recent advances in Generative AI, across all data modalities, including language and vision models. We especially encourage submissions that focus on challenges when applying Generative AI to impactful, real-world, interdisciplinary problems. Potential topics can include, but are not limited to:

- Applications to challenging real-world problems
- Interpretability, Fairness, Robustness, and Safety
- Memorization, Unlearning, and Privacy
- Multi-modal generation
- Technical challenges of deployment and implementation
- Evaluation methodologies, metrics, human-facing evaluations
- Novel methods and architectures
70
icml2023_differentiable
## Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators

Gradients and derivatives are integral to machine learning, as they enable gradient-based optimization. In many real applications, however, models rest on algorithmic components that implement discrete decisions, or rely on discrete intermediate representations and structures. These discrete steps are intrinsically non-differentiable and accordingly break the flow of gradients. Using gradient-based approaches to learn the parameters of such models therefore requires making these non-differentiable components differentiable. This can be done with careful consideration, notably by using smoothing or relaxations to propose differentiable proxies for these components. With the advent of modular deep learning frameworks, these ideas have become more popular than ever in many fields of machine learning, generating in a short time-span a multitude of “differentiable everything” approaches, impacting topics as varied as rendering, sorting and ranking, convex optimizers, shortest-paths, dynamic programming, physics simulations, NN architecture search, top-k, graph algorithms, weakly- and self-supervised learning, and many more.

## Scope

The technical topics of interest at this workshop include (but are not limited to):

- Continuous relaxations of discrete operations and algorithms (e.g., argmax, sorting, ranking, rendering, shortest-path, optimizers, if-else constructs, indexing, top-k, logics, etc.)
- Stochastic relaxations and gradient estimation methods (e.g., stochastic smoothing)
- Weakly- and self-supervised learning with differentiable algorithms, e.g., ranking supervision
- Optimization with diff. algorithms, e.g., regression of scene parameters via diff. rendering
- Systematic techniques for making discrete structures differentiable, e.g., smoothing
- Differentiable simulators such as differentiable fluid dynamics, differentiable particle simulators, differentiable optics, differentiable protein-folding, differentiable cloth simulations, etc.
- Differentiable architecture search, e.g., convolutions with diff. and learnable kernel sizes
- Applications of differentiable relaxations, e.g., in learning-to-rank and computer vision

The workshop does not cover “differentiable programming”, i.e., the programming paradigm of automatic differentiation and its technical implementations. Instead, the workshop covers cases where vanilla automatic differentiation fails or does not yield meaningful gradients.
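As one self-contained instance of such a relaxation, the sketch below smooths ranking (a toy construction in the spirit of pairwise-sigmoid soft ranks, stated here as an illustration rather than any specific published operator): each hard comparison [x_i > x_j] is replaced by a sigmoid with temperature τ, giving a surrogate that is differentiable for τ > 0 and recovers hard ranks as τ → 0.

```python
import numpy as np

def soft_rank(x, tau=1.0):
    """Differentiable relaxation of (ascending, 1-based) ranks:
    hard comparisons [x_i > x_j] become sigmoids of (x_i - x_j) / tau."""
    diff = (x[:, None] - x[None, :]) / tau
    pairwise = 1.0 / (1.0 + np.exp(-diff))      # soft version of x_i > x_j
    np.fill_diagonal(pairwise, 0.0)             # no self-comparison
    return 1.0 + pairwise.sum(axis=1)

x = np.array([0.3, 2.0, -1.0, 0.9])
print("hard ranks:", np.argsort(np.argsort(x)) + 1)   # [2 4 1 3]
print("tau=1.0   :", soft_rank(x, tau=1.0))           # smooth, usable in losses
print("tau=0.01  :", soft_rank(x, tau=0.01))          # -> hard ranks
```

The temperature trades bias against gradient signal, the recurring design choice in nearly all of the relaxations listed above.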
71
icml2023_dp4ml
## Duality Principles for Modern Machine Learning

The ICML Duality Principles workshop brings together researchers working on various duality concepts from many different fields to discuss new applications for modern machine learning, especially focusing on topics such as model understanding, explanation, and adaptation in deep learning and reinforcement learning.

Duality is a pervasive and important principle in mathematics. Not only has it fascinated researchers in many different fields, but it has also been used extensively in optimization, statistics, and machine learning, giving rise to powerful tools such as

- Fenchel duality in convex optimization,
- Representer theorems in kernel methods and Bayesian nonparametrics,
- Dually-flat spaces in information geometry.

Duality played an important role in the past, but lately we have not seen much work on duality principles, especially in deep learning. For example, Lagrange duality can be useful for model explanation because it allows us to measure the sensitivity to certain perturbations, but this potential is not yet fully exploited. This slowdown is perhaps due to a growing focus on nonconvex and nonlinear problems, where duality does not seem to be directly applicable.

## Topics

We invite submissions of papers related (but not limited) to the following topics:

**Theory of duality principle:**

- Representer theorems
- Lagrange and Fenchel dualities, generalized conjugacy and abstract convexity
- Duality on manifolds or metric spaces, geodesic convexity
- Convex relaxations and duality for nonconvex problems
- Duality for optimization over measures, duality in optimal transport
- Other dualities in mathematics, e.g., information geometry, algebraic geometry

**Practical applications of duality principle:**

- Fast knowledge adaptation and transfer, lifelong learning, few shot learning etc.
- Model understanding, explanation and interpretation
- Differentiable programming, smoothing discontinuous/discrete maps
- Reinforcement learning, control, and deep learning in general
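For reference, the convex-analytic duality underlying several of the tools listed above can be stated in two standard lines, for a proper convex function f:

```latex
f^{*}(y) \;=\; \sup_{x \in \mathbb{R}^{d}} \big( \langle x, y \rangle - f(x) \big)
\qquad \text{(Fenchel conjugate)}
\\[4pt]
f(x) + f^{*}(y) \;\ge\; \langle x, y \rangle \quad \text{for all } x, y,
\quad \text{with equality iff } y \in \partial f(x)
\qquad \text{(Fenchel--Young)}
```

The equality case ties primal points to dual certificates, which is the same primal-dual correspondence that makes sensitivity-based explanation, as mentioned above for Lagrange duality, possible.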
72
icml2023_fl
# Federated Learning and Analytics in Practice: Algorithms, Systems, Applications, and Opportunities

## About the Workshop

Proposed in 2016 as a privacy-enhancing technique, federated learning and analytics (FL & FA) have made remarkable progress in theory and practice in recent years. However, there is a growing disconnect between theoretical research and practical applications of federated learning. This workshop aims to bring academics and practitioners closer together to exchange ideas: to discuss actual systems and practical applications that inspire researchers to work on theoretical and practical research questions leading to real-world impact, and to understand current developments and highlight future directions. To achieve this goal, we aim to host keynote talks and panels featuring industry researchers focused on deploying federated learning and analytics in practice, and academic research leaders interested in bridging the gap between theory and practice.

Topics of interest include, but are not limited to, the following:

**Federated Learning and Analytics:**

- Scalable and robust federated machine learning systems.
- Novel cross-device and cross-silo production applications.
- Training, fine-tuning, and personalizing (foundation) models in federated settings.
- Federated analytics vs. federated learning: synergies and differences in algorithms and systems (characteristics, constraints, and orchestration).
- Approaches for addressing distribution shifts and continual learning in federated settings.
- Autotuned federated algorithms for hyperparameters, model architectures, etc.
- Federated learning and analytics as part of an AI lifecycle.
- Open-source frameworks and community for federated learning and analytics.
- Theoretical studies with realistic assumptions for practical settings.

**Privacy and Security in Federated Settings:**

- Differential privacy and other privacy-preserving technologies in federated settings.
- Privacy attacks and empirical privacy auditing techniques in federated contexts.
- Security attacks and defenses in federated settings.
- Multi-party computation protocols & trusted execution environments for federated computations.

**Decentralized Networks and Trustworthiness:**

- Challenges in fully decentralized networks compared to federated settings.
- Trustworthy decentralized learning at scale.

**Fairness, Responsibility, and Social Impact:**

- Fairness and responsible models in federated settings.
- Social impact and privacy policies in federated settings.
73
icml2023_frontiers4lcd
## New Frontiers in Learning, Control, and Dynamical Systems

Recent advances in algorithmic design and principled, theory-driven deep learning architectures have sparked a growing interest in control and dynamical system theory. Complementarily, machine learning plays an important role in enhancing existing control theory algorithms in terms of performance and scalability. The boundaries between the two disciplines are blurring even further with the rise of modern reinforcement learning, a field at the crossroads of data-driven control theory and machine learning. This workshop aims to unravel the mutual relationship between learning, control, and dynamical systems and to shed light on recent parallel developments in different communities. Strengthening the connection between learning and control will open new possibilities for interdisciplinary research areas.

## Topics

We invite submissions related (but not limited) to the following topics:

- Optimal Transport
- Stochastic Processes
- Stochastic Optimal Control
- Dynamical Probabilistic Inference, e.g., MCMC, Variational Inference
- Diffusion Models
- Neural ODEs, SDEs, or PDEs
- Reinforcement Learning
74
icml2023_ilhf
## Interactive Learning with Implicit Human Feedback

Systems that can learn interactively from their end-users are quickly becoming widespread in real-world applications. Typically, humans provide tagged rewards or scalar feedback for such interactive learning systems. However, humans offer a wealth of implicit information (such as multimodal cues in the form of natural language, speech, eye movements, facial expressions, gestures, etc.) which interactive learning algorithms can leverage during the process of human-machine interaction to create a grounding for human intent, and thereby better assist end-users. A closed-loop sequential decision-making domain offers unique challenges when learning from humans: (1) the data distribution may be influenced by the choices of the algorithm itself, and thus interactive ML algorithms need to adaptively learn from human feedback, (2) the nature of the environment itself changes rapidly, (3) humans may express their intent in various forms of feedback amenable to naturalistic real-world settings, going beyond tagged rewards or demonstrations. By organizing this workshop, we attempt to bring together interdisciplinary experts in interactive machine learning, reinforcement learning, human-computer interaction, cognitive science, and robotics to explore and foster discussions on such challenges. We envision that this exchange of ideas within and across disciplines can build new bridges, address some of the most valuable challenges in interactive learning with implicit human feedback, and also provide guidance to young researchers interested in growing their careers in this space.

## Topics

Some potential questions we hope to discuss at this workshop are listed below:

- When is it possible to go beyond reinforcement learning (with hand-crafted rewards) and leverage interaction-grounded learning from arbitrary feedback signals where the grounding for such feedback could be initially unknown, contextual, rich, and high-dimensional?
- How can we learn from natural/implicit human feedback signals such as natural language, speech, eye movements, facial expressions, gestures, etc. during interaction? Is it possible to learn from human guidance signals whose meanings are initially unknown or ambiguous? Even when there is no explicit external reward?
- How should learning algorithms account for a human's preferences or internal reward that is non-stationary and changes over time? How can we account for non-stationarity of the environment itself?
- How much of the learning should be pre-training (i.e., learning for the average user) versus how much should be interactive or personalized (i.e., fine-tuning to a specific user)?
- How can we develop a better understanding of how humans interact with/teach other humans or machines? And how could such an understanding lead to better designs for learning systems that leverage human signals during interaction?
- How can we design intrinsic reward systems that could push agents to (learn to) become socially integrated/coordinated/aligned with humans?
- How can well-known design methods from HCI (such as ability-based design) be imported and massively used in AI/ML? What is missing from today's technological solution paradigms that can allow for ability-based design to be deployed at scale? How can the machine learning community assist the HCI and accessibility research communities in building adaptive learning interfaces targeting a wide range of marginalized and specially-abled sections of society?
- What is the minimal set of assumptions under which learning from arbitrary/implicit feedback signals is possible in the interaction-grounded learning paradigm?
75
icml2023_imlh
## Interpretable Machine Learning in Healthcare

Applying machine learning (ML) in healthcare is gaining momentum rapidly. However, the black-box characteristics of existing ML approaches inevitably lead to less interpretability and verifiability in making clinical predictions. To enhance the interpretability of medical intelligence, it becomes critical to develop methodologies to explain predictions as these systems are pervasively being introduced to the healthcare domain, which requires a higher level of safety and security. Such methodologies would make medical decisions more trustworthy and reliable for physicians, which could ultimately facilitate deployment. On the other hand, it is also essential to develop more interpretable and transparent ML systems. For instance, by exploiting structured knowledge or prior clinical information, one can design models that learn aspects more aligned with clinical reasoning. This may also help mitigate biases in the learning process or identify more relevant variables for making medical decisions.

In this workshop, we aim to bring together researchers in ML, computer vision, healthcare, medicine, NLP, and clinical fields to facilitate discussions of related challenges, definitions, formalisms, and evaluation protocols regarding interpretable medical machine intelligence. Additionally, we will seek possible solutions such as logic and symbolic reasoning over medical knowledge graphs, uncertainty quantification, compositional models, etc. We hope that the proposed workshop offers a step toward building autonomous clinical decision systems with a higher-level understanding of interpretability.

## Topics

Topics include (but are not limited to):

- Definition of interpretability in healthcare
- Identification of out-of-distribution/failure prediction
- Uncertainty quantification for medical decision making
- Designing quantification and measurement of interpretability in healthcare
- Robustness and generalization of medical ML systems
- Graph reasoning in healthcare
- Developing interpretable ML methods aligned with clinical reasoning
- Embedding medical knowledge in ML systems
- Application of interpretation methods to disease understanding
- Auditing and debugging algorithms for diagnosis systems
- Visualization of explanation for model prediction
- Personalized vs. population-level interpretation methods
76
icml2023_llw
## Localized Learning Workshop

Despite being widely used, global end-to-end learning has several key limitations. It requires centralized computation, making it feasible only on a single device or a carefully synchronized cluster. This restricts its use on unreliable or resource-constrained devices, such as commodity hardware clusters or edge computing networks. As model size increases, the cost of synchronized training across devices grows for all types of parallelism. Global learning also requires a large memory footprint, which is costly and limits the learning capability of single devices. Moreover, end-to-end learning updates have high latency, which may prevent their use in real-time applications such as learning on streaming video. Finally, global backpropagation is thought to be biologically implausible, as biological synapses update in a local and asynchronous manner. To overcome these limitations, this workshop will delve into the fundamentals of localized learning, broadly defined as any training method that updates model parts through non-global objectives.

## Topics

Relevant topics include but are not limited to:

- Forward-forward learning
- Greedy training
- Decoupled or early-exit training
- Iterative layer-wise learning
- Asynchronous model update methods
- Biologically plausible methods for local learning
- Localized learning on edge devices
- Self-learning or data-dependent functions
- New applications of localized learning
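A minimal PyTorch sketch of the greedy/decoupled idea above (the architecture, synthetic task, and auxiliary-head design are illustrative assumptions): each block is trained against its own local classifier head, and `detach()` severs the global gradient path, so no end-to-end backpropagation ever occurs.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 20)
y = (X[:, 0] * X[:, 1] > 0).long()           # synthetic binary task

blocks = nn.ModuleList([
    nn.Sequential(nn.Linear(20, 64), nn.ReLU()),
    nn.Sequential(nn.Linear(64, 64), nn.ReLU()),
])
# One local auxiliary head per block supplies the local objective.
heads = nn.ModuleList([nn.Linear(64, 2), nn.Linear(64, 2)])

loss_fn = nn.CrossEntropyLoss()
opts = [torch.optim.Adam(list(b.parameters()) + list(h.parameters()), lr=1e-2)
        for b, h in zip(blocks, heads)]

for step in range(200):
    h = X
    for blk, head, opt in zip(blocks, heads, opts):
        h = blk(h.detach())                  # detach: no global backprop
        loss = loss_fn(head(h), y)           # purely local objective
        opt.zero_grad()
        loss.backward()
        opt.step()

print("final local loss:", loss.item())
```

Because each block's update depends only on its own input and head, the blocks could in principle live on different devices and update asynchronously, which is precisely the systems-level appeal discussed above.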
77
icml2023_mfpl
## The Many Facets of Preference-based Learning

Learning from human preferences, or preference-based learning, has been critical to major advances in AI and machine learning. It rests on the observation that humans are more reliable at providing relative feedback than numerical values. Preference feedback is therefore usually easier to collect and less biased. A recent success story that revealed the dormant potential of learning from preference feedback is the fine-tuning of large language models to follow instructions in a dialogue context, using reinforcement learning with a reward function learned from human feedback. Preference-based learning has yielded promising results in other areas as well, such as guided image generation, robotics and self-driving vehicles, games, collaborative filtering, simulated continuous control tasks, optimization and search problems, and healthcare. Despite these ground-breaking successes, the most exciting opportunities still lie ahead of us.

The broad objective of this workshop is twofold:

1. Bring together different communities where preference-based learning has played a major role.
2. Connect theory to practice by identifying real-world systems that can benefit from incorporating preference feedback.

The aim of this workshop is to create a suitable platform for sharing techniques and ideas, learning from each other, and potentially posing new and groundbreaking research questions.

## Topics

We cordially invite researchers who feel addressed by the theme of the workshop to submit their latest works to our workshop. Topics include but are not limited to:

- Collaborative filtering
- Control theory
- Convex optimization
- Dueling and preference-based bandits
- Econometrics and assortment selection
- Fairness
- Game theory, equilibria, and multiplayer games
- Marketing and revenue management
- Multi-objective optimization
- Ranking aggregation
- Recommender systems
- Reinforcement learning
- Robotics
- Search engine optimization
- Social choice theory
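The core primitive behind much of the area above can be shown in a few lines: fitting item utilities from pairwise comparisons under the Bradley-Terry model, P(i beats j) = sigmoid(r_i - r_j), by maximum likelihood (a toy sketch; the true utilities, sample sizes, and plain gradient ascent are illustrative assumptions, and utilities are only identifiable up to a shift):

```python
import numpy as np

rng = np.random.default_rng(0)
true_r = np.array([0.0, 1.0, 2.5, -0.5])     # hidden item utilities

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Simulate noisy pairwise preferences under the Bradley-Terry model.
pairs = []
for _ in range(2000):
    i, j = rng.choice(4, size=2, replace=False)
    winner_is_i = rng.random() < sigmoid(true_r[i] - true_r[j])
    pairs.append((i, j) if winner_is_i else (j, i))

# Maximum likelihood by gradient ascent on sum of log P(winner beats loser).
r = np.zeros(4)
lr = 0.05
for _ in range(200):
    grad = np.zeros(4)
    for w, l in pairs:
        p = sigmoid(r[w] - r[l])
        grad[w] += 1 - p                     # d/dr_w log sigmoid(r_w - r_l)
        grad[l] -= 1 - p
    r += lr * grad / len(pairs)

print("recovered (centered):", r - r.mean())
print("ground truth        :", true_r - true_r.mean())
```

Replacing the per-item scalar r with a neural reward model over (prompt, response) pairs gives the reward-learning stage of RLHF mentioned above.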
78
icml2023_ncw
# Neural Compression: From Information Theory to Applications

## Overview

The workshop solicits original research at the intersection of machine learning, data/model compression, and more broadly information theory. Machine learning and compression have been described as “two sides of the same coin”, and the exponentially growing amount of data generated in diverse domains underscores the need for improved compression as well as efficient AI systems. Leveraging deep generative models, recent machine learning-based methods have set new benchmarks for compressing images, videos, and audio. Despite these advances, many open problems remain, such as computational efficiency, performance guarantees, and channel simulation. Parallel advances in large-scale foundation models have further spurred research in efficient AI techniques such as model compression and distillation. This workshop aims to bring together researchers in machine learning, data/model compression, and information theory. It will focus on enhancing compression techniques, accelerating large model training and inference, exploring theoretical limits, and integrating information-theoretic principles to improve learning and generalization. By bridging disciplines, we seek to catalyze the next generation of scalable, efficient information-processing systems.

## Topics

Topics of interest include, but are not limited to,

- Improvements in learning-based techniques for compressing data, model weights, implicit/learned representations of signals, and emerging data modalities.
- Accelerating training and inference for large foundation models, potentially in distributed settings.
- Theoretical understanding of neural compression methods, including but not limited to fundamental information-theoretic limits, perceptual/realism metrics, distributed compression and compression without quantization.
- Understanding/improving learning and generalization via compression and information-theoretic principles.
- Information-theoretic aspects of unsupervised learning and representation learning.
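A minimal lossy-compression experiment that exposes the trade-off learned codecs optimize (the Gaussian source and uniform scalar quantizer are illustrative simplifications of a learned transform-coding pipeline): rate is estimated by the empirical entropy of the quantization symbols, distortion by MSE, and sweeping the step size traces a rate-distortion curve of the kind optimized via L = R + λD.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)                   # source samples

def rate_distortion(step):
    symbols = np.round(x / step).astype(int)   # uniform scalar quantizer
    x_hat = symbols * step                     # dequantized reconstruction
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    rate = -(p * np.log2(p)).sum()             # empirical entropy, bits/sample
    distortion = np.mean((x - x_hat) ** 2)     # MSE
    return rate, distortion

for step in [0.1, 0.5, 1.0, 2.0]:
    r, d = rate_distortion(step)
    print(f"step={step:>4}: rate={r:5.2f} bits, distortion={d:8.5f}")
```

Neural codecs replace the identity transform with a learned analysis/synthesis pair and the histogram with a learned entropy model, but the R-versus-D accounting is unchanged.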
79
icml2023_pac
# PAC-Bayes Meets Interactive Learning Workshop ## Scope Interactive learning encompasses online learning, continual learning, active learning, bandits, reinforcement learning, and other settings where an algorithm must learn while interacting with a continual stream of data. Such problems often involve exploration-exploitation dilemmas, which can be elegantly handled with probabilistic and Bayesian methods. Deep interactive learning methods leveraging neural networks are typically used when the setting involves rich observations, such as images. As a result, both probabilistic and deep interactive learning methods are growing in popularity. However, acquiring observations in an interactive fashion with the environment can be costly. There is therefore great interest in understanding when sample-efficient learning with probabilistic and deep interactive learning can be expected or guaranteed. Within statistical learning theory, PAC-Bayesian theory is designed for the analysis of probabilistic learning methods. It has recently been shown to be well-suited for the analysis of deep learning methods. This workshop aims to bring together researchers from the broad Bayesian and interactive learning communities in order to foster the emergence of new ideas that could contribute to both theoretical and empirical advancement of PAC-Bayesian theory in interactive learning settings. ## Topics We are interested in (but not limited to) the following topics: - Explaining the success of existing interactive learning algorithms with PAC-Bayesian theory - PAC-Bayesian analysis of exploration-exploitation trade-offs - PAC-Bayes bounds under distribution shift - PAC-Bayes bounds under adversarial corruptions - Development of practically useful interactive learning algorithms using PAC-Bayesian theory.
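For concreteness, one classical guarantee of the kind this workshop builds on (a McAllester-style PAC-Bayes bound for [0,1]-bounded losses, stated here for reference under the usual i.i.d. assumptions) reads: with probability at least 1 − δ over an n-sample, simultaneously for all posteriors Q,

```latex
\mathbb{E}_{h \sim Q}\big[ L(h) \big]
\;\le\;
\mathbb{E}_{h \sim Q}\big[ \hat{L}_n(h) \big]
+ \sqrt{ \frac{ \mathrm{KL}(Q \,\|\, P) + \ln \frac{2\sqrt{n}}{\delta} }{ 2n } }
```

where P is a data-independent prior, L(h) the expected loss, and L̂_n(h) its empirical counterpart. The KL term is what makes such bounds natural for interactive settings: exploration policies expressed as posteriors Q pay a complexity price measured against the prior P.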
80
icml2023_scis
## Workshop on Spurious Correlations, Invariance and Stability

The workshop brings together domain experts and researchers to facilitate discussions and forge collaborations on the problems posed by spurious correlations and the instability of machine learning models. Models built without accounting for spurious correlations often break when deployed in the wild, despite excellent performance on benchmarks. In particular, models can learn to rely on apparently unnatural or irrelevant features. Such examples abound in recent literature:

1. In detecting lung disease from chest X-rays, models rely on the type of scanner and marks that technicians use in specific hospitals, instead of the physiological signals of the disease.
2. In Natural Language Processing, when reasoning whether a premise entails a hypothesis, models rely on the number of shared words rather than the subject's relationship with the object.
3. In precision medicine, polygenic risk scores for diseases like diabetes and breast cancer rely on genes prevalent mainly in people of European ancestry, and are not as accurate in other populations.

Extensive work on resolving problems akin to spurious correlations has sprung up in several communities. This includes work on invariance constraints and graph-based methods rooted in Causality, methods to avoid discrimination against compromised subgroups in Algorithmic Fairness, and stress-testing procedures to discover unexpected model dependencies in reliable ML. Yet there is little consensus on best practices, useful formal frameworks, rigorous evaluations of models, and fruitful avenues for the future. We invite work addressing all aspects of ML in the presence of spurious correlations, from formalization to deployment.

## Solicited Topics

We invite submissions that address discovery, learning, and unification in the presence of spurious correlations. We welcome a wide range of topics, including but not limited to:

- Methods for discovering and diagnosing spurious correlations.
- Evaluation and stress tests of model stability.
- Impacts of different dataset shifts when learning exploits a shortcut/spurious correlation.
- Learning robust models in the presence of spurious correlations.
- Exploring relationships between methods from causal ML, algorithmic fairness, and OOD generalization.

Furthermore, we strongly encourage practitioners to submit examples of failure modes due to spurious correlations in real-world scenarios. We are particularly interested in submissions that can create new opportunities for collaboration, and motivate foundational research that is impactful in real-world applications.
81
icml2023_sods
# Sampling and Optimization in Discrete Space

## Overview

Sampling and optimization in discrete space are classical and important problems that arise in many applications, including physics, combinatorial optimization, compiler optimization, and many modern machine learning models like large language models and protein models. Examples include searching device placement strategies for distributed neural network training, or sampling from language model posteriors with arbitrary conditioning.

## Challenges

However, sampling and optimization in discrete space are hard in general compared to the continuous case. Although independence structures can be leveraged for some special problems, discrete sampling/optimization is generally slow. Recently, there have been new research trends in efficient discrete sampling and optimization via

- Leveraging the gradient information of objectives: gradient-based MCMC algorithms generalize Langevin dynamics to discrete space (a minimal sketch of such a proposal appears below).
- Embedding into a continuous space: embedding methods first map the discrete space to a continuous one, apply an efficient sampling algorithm there, and then map the new sample back into discrete space.
- Other effective proposal strategies in discrete space: for example, the Stein variational method and GFlowNet.

These methods have improved efficiency in many domains. With simulated annealing, they can also serve as optimization algorithms and demonstrate superior performance on many combinatorial optimization problems compared to existing learning-based methods. However, they might still be limited when applied to black-box objectives, or to problems involving long-range and high-order correlations, as in modern language models.

## Scope and Topics

Given this new direction of research and its potential applications, we are organizing this workshop with goals including, but not limited to:

- Syncing up on the latest research progress in discrete sampling and optimization.
- Discussing limitations of the current methods and brainstorming new algorithmic paradigms.
- Connecting to applications in domains such as language/protein modeling, physics simulation, and bio/chemical engineering, where improved sampling/optimization in discrete space would help, and exploring the current gap between application requirements and the capability of existing methods.

We hope this workshop will be a great opportunity for presenting and discussing new algorithms and applications with researchers and practitioners within or outside the domain of discrete sampling/optimization.
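To give a flavor of the first trend, below is a minimal sketch of a gradient-informed flip proposal for binary variables in the spirit of Gibbs-with-Gradients (Grathwohl et al., 2021); the quadratic log-probability is an illustrative stand-in for a real model:

```python
# Gradient-informed discrete MCMC sketch: propose bit flips with
# probability proportional to a first-order estimate of the log-prob
# change, then correct with Metropolis-Hastings. The quadratic
# log-probability below is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
d = 16
W = rng.normal(scale=0.1, size=(d, d)); W = (W + W.T) / 2
b = rng.normal(scale=0.1, size=d)

def log_prob(x):
    # Unnormalized log p(x) for x in {0,1}^d.
    return 0.5 * x @ W @ x + b @ x

def flip_dist(x):
    # Taylor estimate of the log-prob change from flipping each bit,
    # turned into a categorical proposal via a softmax of half the change.
    delta = (1.0 - 2.0 * x) * (W @ x + b)
    logits = delta / 2.0
    p = np.exp(logits - logits.max())
    return p / p.sum()

x = rng.integers(0, 2, size=d).astype(float)
for _ in range(1000):
    p_fwd = flip_dist(x)
    i = rng.choice(d, p=p_fwd)
    x_new = x.copy()
    x_new[i] = 1.0 - x_new[i]
    p_rev = flip_dist(x_new)
    # MH correction keeps the chain exact despite the approximate proposal.
    log_alpha = (log_prob(x_new) - log_prob(x)
                 + np.log(p_rev[i]) - np.log(p_fwd[i]))
    if np.log(rng.uniform()) < log_alpha:
        x = x_new
```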
82
icml2023_spigm
## Structured Probabilistic Inference & Generative Modeling

Probabilistic inference addresses the problems of amortization, sampling, and integration of complex quantities from graphical models, while generative modeling captures the underlying probability distribution of a dataset. Despite promising results, probabilistic methods face challenges when applied to highly structured data. We aim to bring together experts from diverse backgrounds, from both academia and industry, to discuss the applications and challenges of probabilistic methods, emphasizing challenges in encoding domain knowledge in these settings. We hope to provide a platform that fosters collaboration and discussion in the field of probabilistic methods.

## Scope

Relevant topics to this workshop include but are not limited to:

- Inference and generative methods for graphs, time series, text, video, and other structured modalities.
- Unsupervised representation learning of high-dimensional structured data.
- Scaling and accelerating inference and generative models on structured data.
- Uncertainty quantification in AI systems.
- Applications and practical implementations of existing methods to areas in science.
- Empirical analysis comparing different architectures for a given data modality and application.
83
icml2023_synsml
# Workshop on the Synergy of Scientific and Machine Learning Modeling

## Introduction

The Synergy of Scientific and Machine Learning Modeling Workshop ("SynS & ML") is an interdisciplinary forum for researchers and practitioners interested in the challenges of combining scientific and machine-learning models. The goal of the workshop is to gather machine learning researchers eager to include scientific models in their pipelines, domain experts working on augmenting their scientific models with machine learning, and researchers looking for opportunities to incorporate ML in widely-used scientific models.

The power of machine learning (ML), namely its ability to build models by leveraging real-world data, is also a big limitation: the quality and quantity of training data bound the validity domain of ML models. On the other hand, expert models are designed from first principles or experience, and are labelled scientific if validated on curated real-world data, often harvested for this specific purpose, as advised by the scientific method since Galileo. Expert models only describe idealized versions of the world, which may hinder their deployment for important tasks such as accurate forecasting or parameter inference.

This workshop focuses on **the combination of two modelling paradigms: scientific and ML modelling.** Sometimes called hybrid learning or grey-box modelling, this combination should (1) unlock new applications for expert models, and (2) leverage the data compressed within scientific models to improve the quality of modern ML models (a toy grey-box example is sketched below). In this spirit, the workshop focuses on the symbiosis between these two complementary modelling approaches; it aims to be a "rendezvous" between the involved communities, spanning sub-fields of science, engineering and health, and encompassing ML, to allow them to present their respective problems and solutions and foster new collaborations.

## Topics

The workshop invites submissions on (but not limited to) the following topics:

- **Real-world application** of the combination of scientific and ML modelling: How can scientific models capitalise on ML to exploit raw data to broaden their applicability in the real world?
  - Astronomy
  - Biology
  - Chemistry
  - Geology
  - Robotics
  - Sub-domains of engineering
  - etc.
- **Methodological and theoretical study** on the combination of scientific and ML modelling: How can ML take advantage of the large amounts of data and human effort hidden behind scientific models?
  - Model class and neural architectures
  - Learning algorithms
  - Data preparation
  - Theoretical analysis
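As a toy illustration of the grey-box idea referenced above, the sketch below pairs an idealized "physics" model with a small data-driven residual; the damped-oscillator setup and basis choice are illustrative assumptions, not a method endorsed by the workshop:

```python
# Grey-box sketch: expert model + learned residual correction.
import numpy as np

rng = np.random.default_rng(0)

def physics_model(t):
    # Idealized expert model: undamped oscillation from first principles.
    return np.cos(2.0 * t)

# "Real-world" observations deviate from the idealization (unmodeled damping).
t = np.linspace(0.0, 10.0, 200)
y_obs = np.exp(-0.1 * t) * np.cos(2.0 * t) + rng.normal(scale=0.02, size=t.shape)

def basis(t_arr):
    # Polynomial-in-time modulation of the known oscillation.
    return np.vander(t_arr, N=4, increasing=True) * np.cos(2.0 * t_arr)[:, None]

# Fit only the residual the expert model misses.
coeffs, *_ = np.linalg.lstsq(basis(t), y_obs - physics_model(t), rcond=None)

def hybrid_model(t_arr):
    # Grey-box prediction: expert model plus data-driven correction.
    return physics_model(t_arr) + basis(t_arr) @ coeffs

print("physics-only RMSE:", np.sqrt(np.mean((physics_model(t) - y_obs) ** 2)))
print("hybrid RMSE:     ", np.sqrt(np.mean((hybrid_model(t) - y_obs) ** 2)))
```

The point of the toy: the expert model supplies the oscillation that data alone would struggle to identify, while the learned term absorbs what the idealization leaves out.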
84
icml2023_tagml
## Topology, Algebra, and Geometry in Machine Learning Workshop

Much of the data that is fueling current rapid advances in machine learning is high dimensional, structurally complex, and strongly nonlinear. This poses challenges for researchers' intuition when they ask (i) how and why current algorithms work and (ii) what tools will lead to the next big breakthrough. Mathematicians working in topology, algebra, and geometry have more than a hundred years' worth of finely-developed machinery whose purpose is to give structure to, help build intuition about, and generally better understand spaces and structures beyond those that we can naturally understand. Following on the success of the first TAG-ML workshop in 2022, this workshop will showcase work which brings methods from topology, algebra, and geometry and uses them to help answer challenging questions in machine learning. Topics include mathematical machine learning, explainability, training schemes, novel algorithms, performance metrics, and performance guarantees.

**Topics Covered**

- Geometric Machine/Deep Learning
- Mathematical Machine Learning
- Novel Algorithms
- Equivariant Models
- Explainability
- Interpretability
- Robustness
- Performance Metrics
- Performance Guarantees
- Training Methods
85
icml2023_tom
# Workshop on Theory of Mind in Communicating Agents

## Motivation

Theory of Mind (ToM) is the ability to reason about the minds of other agents. The main theme of our workshop is the computational modeling of ToM, with a special focus on the role of natural language in such modeling. Specifically, the workshop focuses on cognitive foundations and theories of ToM, the acquisition of ToM and its relationship with language, leveraging ToM to improve and explain NLP and ML models, and using ToM for positive social impact. This workshop intends to promote the community of researchers that are interested in improving the ability of intelligent agents to reason about others' mental states. Our proposed program provides a space to discuss pathways for understanding and applying ToM in psycholinguistics, pragmatics, human value alignment, social good, model explainability, and many other areas of NLP.

## Topics

**Potential topics include:**

- Leveraging ToM for Machine Learning Applications (e.g., NLP, Robotics, CV)
- Cognitive Science Perspectives of ToM
- ToM for HCI / Human-AI collaboration
- Surveys or replication of existing work
- Social Impacts of ToM
86
icml2024_FoRLaC
# Foundations of RL and Control: Connections and New Perspectives

## About

Despite rapid advances in machine learning, solving large-scale stochastic dynamic programming problems remains a significant challenge. The combination of neural networks with RL has opened new avenues for algorithm design, but the lack of theoretical guarantees of these approaches hinders their applicability to high-stakes problems traditionally addressed using control theory, such as online supply chain optimization, industrial automation, and adaptive transportation systems. This workshop focuses on recent advances in developing a learning theory of decision (control) systems that builds on techniques and concepts from two communities that have had limited interactions despite their shared target: reinforcement learning and control theory.

This workshop aims to reinforce the connection between reinforcement learning and control theory by bringing together researchers from both fields. In particular, we invite contributions on all fundamental and theoretical aspects, with a special emphasis on topics that connect both fields and provide new perspectives. Contributions that bridge theory and applications are also welcome. We believe that significant progress in tackling large-scale applications can only be achieved through collaborative efforts and a mutual understanding of each field's strengths and approaches. Our workshop is dedicated to fostering dialogue and collaboration, paving the way for breakthroughs in complex dynamic programming challenges and interactive systems.

## Topics

We invite researchers to submit papers on the topics listed below. Technical topics include, but are not limited to, the following aspects (a formal definition of regret, one of the central performance measures, is sketched after this list):

- Performance measures and guarantees: Stability, robustness, regret bounds, sample complexity, stochastic vs. non-stochastic approaches, MDPs, etc.
- Fundamental assumptions: Linear and non-linear systems, excitation, stability, etc.
- Fundamental limits: Results that mathematically characterize the difficulty of a given problem; statistical, information-theoretic, and computational lower bounds.
- Computational aspects: Efficient algorithms, computational hardness, approximations, etc.
- Topology: Continuous state and action spaces vs. discrete spaces; discrete- and continuous-time analysis.
- Models: Bandits, Markov Decision Processes, linear and nonlinear control, partial observability, POMDPs, partial monitoring, etc.
- Data acquisition & exploration: Exploration-exploitation trade-offs, pure exploration, experimental design.
- Offline vs. online: Open-loop and closed-loop control, offline and online reinforcement learning, and hybrid approaches.
- Planning and learned search: Dynamic programming, tree search, and planning algorithms.
- Target applications: Formalization of applications such as autonomous vehicles, robots, industrial processes, recommender systems, internet routing, hardware optimization, hyper-parameter optimization, and AutoML.
- Benchmarks: Evaluation of algorithms and theoretical results on a suitable collection of problems.
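As referenced in the topic list, a common performance measure shared by both communities is cumulative regret; the notation below is illustrative and elides measurability and noise-model details:

```latex
% Cumulative regret over horizon $T$: $c_t$ is the stage cost, $(x_t, u_t)$
% the learner's state-action pair, and $\Pi$ a comparator class of
% policies/controllers (e.g., stabilizing linear state-feedback laws).
R_T \;=\; \sum_{t=1}^{T} c_t(x_t, u_t)
\;-\; \min_{\pi \in \Pi} \sum_{t=1}^{T} c_t\big(x_t^{\pi}, \pi(x_t^{\pi})\big)
```

Sublinear growth of R_T in T is the usual learning-theoretic target, while control theory additionally asks for guarantees such as stability along the way.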
87
icml2024_accml
# Workshop on Efficient and Accessible Foundation Models for Biological Discovery

## About

There is a growing gap between machine learning (ML) research on biology-inspired problems and the actual broad-based use of ML in the lab or the clinic. This gap is especially pressing in the context of foundation models and other large ML models. Accessibility and efficiency concerns limit the adoption of these models by biologists and clinicians. Large ML models may require extensive GPU clusters to train, while most biological labs only have access to much more modest computational resources. The usability of these models for non-expert users is also a concern, as is the need to iteratively adapt these models based on lab discoveries.

This workshop seeks to bring ML and biomedical researchers together to identify interdisciplinary approaches to design and apply large, complex ML models for biomedical discovery. We invite researchers from academia and industry to submit original papers to bridge the accessibility and efficiency gap between ML research and wet lab use. All accepted papers will be invited to present posters at the workshop, and a few will be invited to give individual spotlight presentations.

## Topics

We are seeking original submissions in topics including, but not limited to:

- Parameter-, memory-, and compute-efficient foundation models for biological data, including model compression and quantization techniques
- Algorithms for training efficient generative models in biology
- Efficient fine-tuning and adaptation of biological foundation models
- Accessible cloud/web-based methods for foundational biological discovery
- Knowledge distillation and transfer learning across biological contexts
- Lab in the loop: iterative approaches to refine ML models based on initial experimental results
- Hypothesis-driven machine learning in biology and uncertainty modeling in biological foundation models
88
icml2024_ai4math
# AI for Math

## Workshop Summary

Mathematical reasoning is one of the most advanced forms of human intelligence. Humans develop formal languages for rigorously describing mathematical problems and deriving mathematical knowledge. The machine learning community has endeavored to develop neural models with mathematical reasoning capabilities comparable to those of humans. At the same time, a shared vision in the community is that these models will collaborate with humans for mathematical discoveries.

The goal of this workshop is to bring together researchers working on various domains to discuss the progress and the future of applying AI technologies to mathematics. As mathematics is fundamental for almost all modern sciences (including computer science), a vast range of related topics are also within our scope. To this end, this workshop focuses on several crucial yet underexplored problems. Specifically, we are expecting attendants from various backgrounds, institutions, and disciplines to discuss areas related to the following:

- **Autoformalization and its reverse, auto-informalization**: How can we develop methods that improve the precision of the autoformalization process from natural-language proof to formal proof, and, as a dual process, describe a formal proof in natural language? (A toy example of the formalization target appears after this list.)
- **Automated theorem proving**: How do we build consistent theorem provers? How do we mitigate or resolve intermediate-step errors in proving?
- **Automated theorem generation**: Can neural models generate new and practically valid theorems? How do we take full advantage of such generated theorems?
- **Code augmentation and auxiliary methods for mathematical reasoning**: How can handy and plentiful code data help models conduct mathematical reasoning?
- **Formal verification and code generation**: How can progress made in AI for Math help or be directly deployed to the field of formal verification? What are the common technical difficulties? How can AI systems write provably correct code, given any (formal) specifications?

In addition to the problem areas above, we also welcome research work related to the following topics:

- **Measurement**: How do we measure the quality of autoformalization?
- **Reasoning in related areas**: program synthesis, software verification, neurosymbolic reasoning, logical reasoning.
- **Applications**: applying mathematical reasoning techniques to sciences, finance, education, etc.
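As a toy example of what autoformalization targets, consider pairing an informal statement with a formal counterpart; the Lean 4 snippet below is deliberately elementary and simply reuses an existing library lemma:

```lean
-- Informal statement: "addition of natural numbers is commutative."
-- One formal counterpart in Lean 4; the proof defers to the core
-- library lemma Nat.add_comm.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

An autoformalization system would be expected to produce the formal statement (and ideally the proof) from the English sentence alone; auto-informalization runs the translation in the opposite direction.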
89
icml2024_ai4science
# AI for Science: Scaling in AI for Scientific Discovery

## About

Dramatic developments in AI have led to its increasing adoption in science as a means to model complex phenomena, generate hypotheses, design experiments, collect and interpret large datasets, and gain new insights that might not have been possible using traditional scientific methods alone. The main goal of this workshop series is to discover synergy across a variety of scientific fields, encourage interdisciplinary discussions, and enhance the flow of knowledge between AI and various scientific communities. Throughout history, bridging seemingly different fields has brought overarching benefits, with notable examples: entropy in thermodynamics and information theory, neuroscience and AI, and algorithms inspired by discoveries in science (e.g., genetic algorithms, simulated annealing, and diffusion-based generative models). In the current era, successes of AI methods in different fields of science have pointed to the general effectiveness of common themes: large simulated datasets, enforcing problem symmetries, and foundation model architectures. Our mission is to bring more scientists to attend ICML to share different perspectives on the use of AI, and to illuminate exciting research directions for AI researchers.

## Interested Areas

We welcome submissions from all AI for Science areas, but we concentrate some of our talks and panels on scaling in AI for Science.

- How can scaling help AI for Science?
- How can scaling be done in AI for Science?
- How does scaling change the Pareto frontier of methodology, interpretability and discovery?
- What are the limitations of scaling, and what are the remedies?
90
icml2024_alignRL
# Aligning Reinforcement Learning Experimentalists and Theorists

## Overview

Recent progress in reinforcement learning (RL) has powered breakthroughs in various real-world problems, gathering considerable attention and investment. However, it has also exposed a significant gap between theoretical and experimental developments. RL theory has grown significantly in the past two decades. Research has characterized the inherent difficulty of various settings and produced a wide variety of algorithms that attain optimal performance. Furthermore, a huge leap has been made in understanding how to handle large state spaces using function approximation techniques, identifying key structural properties that enable efficient learning. However, despite theoretical guarantees, applying RL algorithms to complex problems faces challenges. Theoretical algorithms often focus on simplified settings, making them hard to apply to real-world complexities. Furthermore, optimizing for worst-case scenarios, which include unlikely situations, can lead to algorithms that perform poorly on practical tasks. Yet, while specialized algorithms offer empirical success, they might not translate to other problems due to their specific design, and the reliance on heuristics and engineering fixes further widens the gap between theory and practice. With this workshop, we aim to bring theorists and experimentalists together to drive future research in RL.

## Desiderata

While theorists and experimentalists share a common interest in advancing the field, their research objectives, methodologies, and challenges sometimes diverge significantly. This workshop aims to bridge this gap and to shed light on recent developments and synergies in both communities. Specifically, we aim to promote the following long-term desiderata.

- Communicate existing results. As the field evolves rapidly, theorists and experimentalists often find themselves immersed in their own domain, occasionally overlooking valuable insights and challenges encountered by the other. Participants will have the opportunity to present key findings, best practices, and lessons learned, emphasizing the importance of cross-disciplinary awareness. This proactive sharing of knowledge will help create a collaborative atmosphere that promotes a deeper appreciation for existing work and encourages fruitful discussions on the current state and future directions of RL.
- Identify new problem classes of practical interest. We aim to emphasize new structures and perspectives that have not been widely investigated yet. Experimentalists can present algorithms that work surprisingly well but lack theoretical understanding. Equally important are the cases where algorithms fail despite expectations. This collaboration will ensure that theoretical progress addresses the most compelling issues faced in practice and that advancements in empirical research get the attention of theorists, creating a mutually beneficial exchange of ideas.
91
icml2024_autorl
# Automated Reinforcement Learning: Exploring Meta-Learning, AutoML, and LLMs

## About

The past few years have seen a surge of interest in reinforcement learning, with breakthrough successes in applying RL to games, robotics, chemistry, logistics, nuclear fusion, and more. These headlines, however, blur the picture of what remains a brittle technology, with many successes relying on heavily engineered solutions. Indeed, several recent works have demonstrated that RL algorithms are brittle to seemingly mundane design choices. Thus, it is often a significant challenge to effectively apply RL in practice, especially on novel problems, limiting its potential impact and narrowing its accessibility.

In this workshop, we want to bring together different communities working on solving these problems. A variety of distinct sub-communities spanning RL, Meta-Learning and AutoML have been working on making RL work out-of-the-box in arbitrary settings - this is the AutoRL setting. Recently, with the emergence of LLMs and their in-context learning abilities, they have significantly impacted all these communities. There are LLM agents tackling traditional RL tasks as well as few-shot RL agents increasing efficiency and generalization that are also trying to automate RL. LLMs have also been influencing AutoML directly with papers such as OptFormer. However, there is currently little crossover between these communities. As such, we want to create the space to connect them and cross-pollinate ideas for automating RL. We believe closer connections between these communities will ultimately lead to faster and more focused progress on AutoRL, and an in-person workshop is the ideal way to allow for greater interaction between them. Through a mixture of diverse expert talks and opportunities for conversation, we hope to emphasize the many facets of current AutoRL approaches and where collaboration across fields is possible.

## Focused Areas

The workshop will focus on novel and unpublished work including, but not limited to, the areas of:

- LLMs for reinforcement learning
- In-context reinforcement learning
- Meta-reinforcement learning
- RL algorithm discovery
- Fairness & interpretability via AutoRL
- Curricula and open-endedness in RL
- AutoML for reinforcement learning
- Reinforcement learning for LLMs
- NAS for deep reinforcement learning
- Theoretical guarantees for AutoRL
- Feature & hyperparameter importance for RL algorithms
- Demos of AutoRL systems
- Hyperparameter-agnostic RL algorithms
92
icml2024_dmlr
# Workshop on Data-Centric Machine Learning Research

## Scope

Large-scale foundation models are revolutionizing machine learning, particularly in vision and language domains. While model architecture received significant attention in the past, recent focus has shifted towards the importance of data quality, size, diversity, and provenance. This workshop aims to highlight cutting-edge advancements in data-centric approaches for large-scale foundation models in new domains, in addition to language and vision, and to engage the vibrant interdisciplinary community of researchers, practitioners, and engineers who tackle practical data challenges related to foundation models. By featuring innovative research and facilitating collaboration, it aims to bridge the gap between dataset-centric methodologies and the development of robust, versatile foundation models that are able to work in and across a variety of domains in service of humanity.

## Topics

We solicit papers covering the following topics:

- Data sources for large-scale datasets:
  - Construction of datasets from large quantities of unlabeled/uncurated data
  - Model-assisted dataset construction
- Quality signals for large-scale datasets
- Datasets for evaluation
- Datasets for specific applications
- Impact of dataset drifts in large-scale models
- Ethical considerations for and governance of large-scale datasets
- Data curation and HCI
- Benchmarks such as DataPerf, DynaBench, and DataComp
93
icml2024_fminwild
# Workshop on Foundation Models in the Wild

## Abstract

In the era of AI-driven transformations, foundation models (FMs), like large-scale language and vision models, have become pivotal in various applications, from natural language processing to computer vision. These models, with their immense capabilities, reshape the future of scientific research and the broader human society, but also introduce challenges in their in-the-wild/real-world deployments. The Workshop on FMs in the wild delves into the urgent need for these models to be useful when deployed in our societies. The significance of this topic cannot be overstated, as the real-world implications of these models impact everything from daily information access to critical decision-making in fields like medicine and finance. Stakeholders, from developers to end-users, care deeply about this because the successful integration of FMs into in-the-wild frameworks necessitates careful consideration of adaptivity, reliability, and efficiency. Some of the fundamental questions that this workshop aims to address are:

1. Real-world Adaptation: In practical applications, how can we leverage the comprehensive knowledge in FMs to adapt them for specific domains, such as drug discovery, education, or clinical health?
2. Reliability and Responsibility: How can foundation models work reliably outside their training distribution? And how can we address issues like hallucination and privacy?
3. Safety, Ethics, and Fairness in Society: How do we ensure that the deployment of FMs preserves safety, ethics, and fairness within society, safeguarding against biases and unethical use?
4. Practical Limitations in Deployment: How can FMs tackle challenges in practical applications, such as system constraints, computational costs, data-acquisition barriers, and response-time demands?
94
icml2024_gram
# Workshop on Geometry-grounded Representation Learning and Generative Modeling

## Motivation

By recognizing that nearly all data is rooted in our physical world, and thus inherently grounded in geometry and physics, it becomes evident that representation learning should preserve this grounding in order to remain meaningful. For example, preserving group transformation laws and symmetries through equivariant layers is crucial in domains such as computational physics, chemistry, robotics, and medical imaging, and leads to effective and generalizable architectures and improved data efficiency (a small numerical equivariance check is sketched after the topic list). Similarly, in generative models applied to non-Euclidean data spaces, maintaining the manifold structure is essential in order to obtain meaningful samples. Therefore, this workshop focuses on the principle of grounding in geometry.

## Topics

We solicit submissions that present theoretical research, methodologies, applications, insightful analysis, and even open problems, within the following topics (list not exhaustive):

- Structure-preserving learning
  - Preservation of symmetries, e.g., through equivariant operators.
  - Dynamical systems on manifolds: representation learning and generative modeling using ordinary, stochastic, and partial differential equations (ODEs, SDEs, PDEs) on manifolds.
  - Computing with geometric representations, such as the processing of multi-vectors using geometric algebra, steerable vectors using Clebsch-Gordan products, and hyperbolic features using Fréchet means.
- Structure-inducing learning
  - Self-supervised learning, e.g., learning to embed data in geometric latent spaces through (geodesic) distance-based similarity metrics.
  - Geometric priors, e.g., soft constraints on model weights.
  - Physics-Informed Neural Networks, e.g., inducing the structure of established physical and geometric laws into neural networks through dedicated losses.
- Generative modeling and density estimation
  - Geometric latent variable models, i.e., the use of latent variables that live on a manifold.
  - New methods, and adaptations of existing methods, capable of:
    - Generating geometric objects, e.g., atomic point clouds or shapes.
    - Generating fields over manifolds, e.g., vector fields or spherical signals.
- Grounding in theory
  - Theoretical frameworks: unifying analyses and formulations that provide a generalizing perspective on deep learning paradigms.
  - Open problems: identifying and addressing unresolved questions and challenges that lie at the intersection of geometry and learning.
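To make the equivariance principle above concrete, here is a small numerical check that a DeepSets-style layer commutes with permutations of its input set; the layer and sizes are illustrative assumptions:

```python
# Numerical check of permutation equivariance for a DeepSets-style layer:
# permuting input rows should permute output rows identically.
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_out = 5, 3, 4
A = rng.normal(size=(d_in, d_out))   # per-element transformation
B = rng.normal(size=(d_in, d_out))   # global (mean-pooled) transformation

def equivariant_layer(X):
    # f(X) = X A + mean(X) B: the mean is permutation-invariant, so the
    # whole map commutes with row permutations.
    return X @ A + X.mean(axis=0) @ B

X = rng.normal(size=(n, d_in))
perm = rng.permutation(n)
lhs = equivariant_layer(X[perm])     # permute, then apply the layer
rhs = equivariant_layer(X)[perm]     # apply the layer, then permute
assert np.allclose(lhs, rhs), "layer is not permutation-equivariant"
print("max deviation:", np.abs(lhs - rhs).max())
```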
95
icml2024_hdlearning
# High-dimensional Learning Dynamics 2024: The Emergence of Structure and Reasoning

## About

The unprecedented scale and complexity of modern neural networks have revealed emergent patterns in learning dynamics and scaling behaviors. Recent advances in analyzing high-dimensional systems have uncovered fundamental relationships between model size, data requirements, and computational resources (an illustrative scaling law is given below), while highlighting the intricate nature of optimization landscapes. This understanding has led to deeper insights into architecture design, regularization, and the principles governing neural learning at scale.

## Areas

The HiLD workshop seeks to spur research and collaboration around:

* Developing analyzable models and dynamics to explain observed deep neural network phenomena;
* Competition and dependencies among structures and heuristics, e.g., simplicity bias or learning staircase functions;
* Creating mathematical frameworks for scaling limits of neural network dynamics as width and depth grow;
* Provably explaining the role of the optimization algorithm, hyper-parameter choices, and neural network architectural choices in training/test dynamics;
* Relating optimizer design and loss landscape geometry to implicit regularization, inductive bias, and generalization;
* High dimensionality, where intuitions from low-dimensional geometry tend to suggest inaccurate (and often misleading) properties of machine learning models on large real-world datasets;
* Connecting model architectures and data distributions to generalization, memorization, and forgetting.
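As a concrete instance of the size-data-compute relationships mentioned above, one widely used empirical parameterization is the Chinchilla-style scaling law of Hoffmann et al. (2022); the constants are fit per model family, so the form below is illustrative:

```latex
% Pretraining loss as a function of parameter count N and training
% tokens D; E, A, B, alpha, beta are constants fit to empirical runs.
L(N, D) \;=\; E \;+\; \frac{A}{N^{\alpha}} \;+\; \frac{B}{D^{\beta}}
```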
96
icml2024_humans_algs_society
# Workshop on Humans, Algorithmic Decision-Making and Society: Modeling Interactions and Impact

## About

With the widespread adoption of machine learning in social technologies, there are increasingly complex interactions between humans, algorithmic decision-makers, and society at large. For instance, algorithmic decisions influence the information and opportunities that are available to individuals, the news they read, the job listings they are matched to, the credit lines they receive, and the social circles they form. Such decisions can therefore affect societal outcomes such as social mobility, mental health, polarization, etc. At the same time, humans also influence algorithmic decision-makers, for instance, by expressing their preferences through observed behaviors, which might be inconsistent or strategic. To understand long-term individual and societal outcomes resulting from these interactions, and to develop algorithms that mitigate undesired outcomes, it has therefore become increasingly important to model these complex interactions as a whole (a toy feedback-loop simulation is sketched after the topic list).

The purpose of this workshop is to bring together researchers from both academia and industry working on the full spectrum of modeling interactions between AI systems, humans, and society, from theory to practice. We will invite speakers and solicit contributed papers and posters covering the various facets of these interactions. We are targeting different communities/fields such as machine learning, network science, social systems, algorithmic game theory, and economics. We expect that the presence of these different communities will result in a fruitful exchange of ideas and stimulate an open discussion about the current challenges and possible solutions.

## Topics

We invite submissions that are related to the interplay of humans, algorithmic decision-making and society with a special focus on modeling interactions and their impact. In particular, we encourage submissions on the following topics:

- Feedback loops between human and algorithmic decisions, and their long-term impacts
- Strategic behavior and its impact on algorithmic decision-making
- Models for human utility/preferences in the presence of non-rational behavior
- Generative and foundation models for interpretable human behavior
- Emergent social phenomena and complex systems
- Modeling societal outcomes through multi-agent models, mean-field games, etc.
- Fairness and algorithmic approaches to mitigate disparate impact
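The toy simulation below illustrates the kind of feedback loop in scope: a greedy recommender reinforces whatever a user already appears to prefer. All dynamics and constants are invented for illustration, not drawn from any particular paper:

```python
# Toy preference-recommendation feedback loop: greedy recommendation
# plus preference reinforcement concentrates attention on a few items.
import numpy as np

rng = np.random.default_rng(0)
k = 5                                  # number of items/topics
pref = np.full(k, 1.0 / k)             # user's preference distribution
eta = 0.05                             # strength of the feedback effect

for t in range(500):
    scores = pref + rng.normal(scale=0.05, size=k)  # noisy engagement signal
    item = int(np.argmax(scores))                   # greedy recommendation
    pref[item] += eta * (1.0 - pref[item])          # consumption reinforces item
    pref /= pref.sum()                              # renormalize to a distribution

# Preference mass typically collapses onto one or two items: a cartoon
# of the homogenization/polarization dynamics the workshop studies.
print("final preferences:", np.round(pref, 3))
```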
97
icml2024_icl
## About

In-context learning (ICL) is an emerging capability of large-scale models, including large language models (LLMs) like GPT-3, to acquire new capabilities directly from the context of an input example without separate training or fine-tuning, enabling these models to adapt rapidly to new tasks, datasets, and domains (a minimal prompt-construction sketch appears after the topic list). This workshop brings together diverse perspectives on this new paradigm to assess progress, synthesize best practices, and chart open problems. Core topics will include architectural and other inductive biases enabling in-context skill acquisition, and reliable evaluation of ICL in application domains including reinforcement learning, representation learning, and safe and reliable machine learning.

We invite submissions to the ICL 2024 workshop, focusing on the development of new architectures, algorithms, theoretical analysis, empirical studies, and applications of In-Context Learning (ICL). Submissions must present original research that has not been previously published.

## Topics

Specific topics of interest include, but are not limited to:

- architectures, training paradigms, and inductive biases that enable or improve ICL;
- theoretical analyses and guarantees for ICL methods;
- empirical evaluation of the performance of ICL;
- interpretability, controllability, and safety considerations for ICL systems;
- similarities and differences between ICL in large-scale language modeling systems and learned algorithms in other domains;
- the relationship between ICL and few-shot learning, meta-learning, and automated machine learning (AutoML).
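To fix ideas, the sketch below assembles a few-shot in-context prompt: the "training set" lives entirely in the input text and no weights are updated. The task, labels, and formatting are illustrative assumptions:

```python
# Few-shot in-context prompt construction (illustrative task and format).
demonstrations = [
    ("I loved this movie!", "positive"),
    ("The plot made no sense.", "negative"),
    ("An instant classic.", "positive"),
]
query = "The acting was wooden and the pacing glacial."

prompt = "Classify the sentiment of each review.\n\n"
for text, label in demonstrations:
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += f"Review: {query}\nSentiment:"

# `prompt` would be sent to a pretrained LLM; its next-token prediction
# serves as the classification, with no fine-tuning involved.
print(prompt)
```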
98
icml2024_lcfm
# Workshop on Long-Context Foundation Models

## Overview

Many challenging tasks for foundation models require synthesizing information over thousands to millions of individual pieces of data, which may take many forms, including images, text, audio, genomes, etc. Our workshop aims to convene researchers to address challenges in long-context foundation models, fostering discussions, developments, deployments, evaluation, and understanding of long-context foundation models across various AI disciplines.

## Topics

The topics of this workshop include (but are not limited to):

- New modeling, training, and data strategies.
- Efficiency techniques for (long-context) foundation models.
- Evaluation and understanding of long-context models.
- Retrieval-augmented foundation models.
- Interdisciplinary applications of LCFMs.
99
icml2024_llmandcog
# Workshop on LLMs and Cognition

## About

Large Language Models (LLMs) have undoubtedly taken center stage in the AI revolution, showing impressive performance in a wide variety of tasks, including machine translation, standardized tests, and conversational chatbots. It is even more impressive to uncover that these models exhibit unpredictable capabilities in solving unseen tasks. This demonstration of emergent abilities, often credited to the scale of parameters and data in the case of LLMs, is considered a footprint of intelligence. The goal of this workshop is to assess and understand the position of current LLMs' abilities in the landscape of intelligent systems, with a strong focus on cognitive abilities.

## Topics

By bringing in experts from different scientific disciplines, such as AI/ML, neuroscience, cognitive science, and psychology, we aim to discuss topics that include but are not limited to:

- Where do LLMs stand in terms of performance on cognitive tasks, such as reasoning, navigation, planning, and theory of mind?
- What are the fundamental limits of language models with respect to cognitive abilities?
- How do LLMs fine-tuned on specific tasks end-to-end compare to augmented LLMs coupled with external modules?
- What are the similarities and differences between mechanistic interpretability approaches in AI and in neuroscience? What do they tell us about similarities and differences between LLMs and human brains?
- How can we improve existing benchmarks and evaluation methods to rigorously assess cognitive abilities in LLMs?
- Can multimodal and multiagent approaches address some of the current limits of LLMs on cognitive tasks?