TL;DR: Learning from human preferences, specifically Reinforcement Learning from Human Feedback (RLHF), has been a key component in the recent development of large language models such as ChatGPT or Llama 2. Up until recently,…
UniControl was accepted to NeurIPS’23. Other authors include Yingbo Zhou, Huan Wang, Juan Carlos Niebles, Caiming Xiong, Silvio Savarese, Stefano Ermon, and Yun Fu. Is it possible for a single model to master…
Conference Overview Next week, the Thirty-seventh annual Conference on Neural Information Processing Systems (NeurIPS) will be held in New Orleans, Louisiana, from Sunday, December 10th, through Saturday, December 16th. NeurIPS will include invited…
Background Graphic layout designs serve as the foundation of communication between media designers and their target audience. They play a pivotal role in organizing various visual elements, including rendered text, logos, product images,…
TL;DR: With CodeChain, a pretrained large language model (LLM) can solve challenging coding problems by integrating modularity into its generated samples and self-improve by employing a chain of self-revisions on representative sub-modules. CodeChain can…
TL;DR: We adapted our protein language model ProGen to optimize antibodies that bind to a protein called “CD40L”, a critical target for autoimmune disorders. We tested our AI-designed antibodies in the laboratory…
Other authors include Can Qin, Stefano Ermon, and Yun Fu. GlueGen was accepted to ICCV. In the rapidly advancing field of text-to-image synthesis, the remarkable progress in generating lifelike images from textual prompts has…