
Salesforce Research at NeurIPS 2023


Conference Overview

Next week, the Thirty-seventh Annual Conference on Neural Information Processing Systems (NeurIPS 2023) will be held in New Orleans, Louisiana, from Sunday, December 10th, through Saturday, December 16th. The conference will include invited talks, demonstrations, and oral and poster presentations of accepted papers. NeurIPS 2023 will once again take place at the New Orleans Ernest N. Morial Convention Center.

Salesforce AI Research Sponsorship and Events

Conference Sponsorship: Salesforce AI Research is proud to support NeurIPS 2023 as a Platinum-level sponsor. Our team of researchers and recruiters will be showcasing demos, discussing our many open career opportunities, and chatting with attendees at our booth (#809) all week. Our booth hours are as follows:

Mon, Dec 11    9:00am – 8:30pm (GC Welcome Reception at 6:30pm)

Tue, Dec 12    9:00am – 5:00pm

Wed, Dec 13    9:00am – 5:00pm

Thu, Dec 14    9:00am – 2:30pm

LatinX in AI: We’re excited to continue our partnership with the LatinX in AI Community. We will be participating in the LatinX in AI (LXAI) Workshop at NeurIPS 2023 on Monday, December 11th, from 8:30am – 4:30pm. 

Women in Machine Learning: Salesforce Research is excited to also be participating in the Women in Machine Learning Workshop, taking place on Monday, December 11th, from 8:00am – 4:30pm. 

Networking Event: Salesforce AI Research will host an invite-only Networking event on Wednesday, December 13th, from 6:30pm – 9:00pm at the Bower Bar in New Orleans.

Salesforce AI Research Publications at NeurIPS 2023

Salesforce Research is pleased to announce a total of 11 accepted Oral and Poster papers from our team of leading researchers.

Our accepted authors will present their work throughout the main conference, with specific times, dates, and locations indicated below. We look forward to sharing some of our exciting new research with you!

Salesforce Researchers are shown in bold in the publication descriptions below.

Oral Presentation: Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection

Yu Bai, Fan Chen, Huan Wang, Caiming Xiong, Song Mei

We study the in-context learning (ICL) capability of transformers. We give a comprehensive theory for transformers to perform ICL, and in addition show that transformers can learn in context like statisticians: a single transformer can select different algorithms for different data at hand.

Oral session: Dec 13 (Wednesday) 4:00 pm – 4:15 pm CST, Room R06-R09 (level 2)

Poster: Dec 13 (Wednesday) 5:00 pm – 7:00 pm CST

Poster Location: Great Hall & Hall B1+B2 (level 1) #526

What can a Single Attention Layer Learn? A Study Through the Random Features Lens

Tianyu Guo, Hengyu Fu, Yu Bai, Song Mei

We study the expressivity of a single random-feature attention layer, and derive concrete learning results from finite samples.

Poster: Dec 14 (Thursday) 5:00 pm – 7:00 pm CST

Poster Location: Great Hall & Hall B1+B2 (level 1) #1002

Efficient RL with Impaired Observability: Learning to Act with Delayed and Missing State Observations

Minshuo Chen, Yu Bai, H. Vincent Poor, Mengdi Wang

We design provably sample-efficient reinforcement learning algorithms in a new feedback model where some state observations can be delayed or missing in each episode.

Poster: Dec 13 (Wednesday) 5:00 pm – 7:00 pm CST

Poster Location: Great Hall & Hall B1+B2 (level 1) #1909

BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing

Dongxu Li, Junnan Li, Steven Hoi

We propose a multi-modal encoder-based architecture for efficient training and generation of personalized visual content. The model is generic, supporting various generative applications, including controlled generation, editing, and stylization.

Poster: Dec 14 (Thursday) 10:45 am – 12:45 pm CST

Poster Location: Great Hall & Hall B1+B2 (level 1) #129

InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning

Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, Steven Hoi

We systematically validate the effectiveness of visual instruction tuning by extending BLIP-2 models. Our model achieves superior performance across multiple competitive multimodal benchmarks.

Poster: Dec 14 (Thursday) 5:00 pm – 7:00 pm CST

Poster Location: Great Hall & Hall B1+B2 (level 1) #711

UniControl: A Unified Diffusion Model for Controllable Visual Generation In the Wild

Can Qin, Shu Zhang, Ning Yu, Yihao Feng, Xinyi Yang, Yingbo Zhou, Huan Wang, Juan Carlos Niebles, Caiming Xiong, Silvio Savarese, Stefano Ermon, Yun Fu, Ran Xu

We introduce UniControl, a new generative foundation model that unifies language prompts for context control and 9 many-shot + 3 zero-shot conditions for pixel-level structure control.

Poster: Dec 14 (Thursday) 5:00 pm – 7:00 pm CST

Poster Location: Great Hall & Hall B1+B2 (level 1) #724

ConRad: Image Constrained Radiance Fields for 3D Generation from a Single Image

Blog: https://blog.salesforceairesearch.com/detail-preserving-image-to-3d-generation/

Senthil Purushwalkam, Nikhil Naik

We present ConRad, a NeRF representation that facilitates construction of 3D models from a single image while accurately preserving the details of the depicted object.

Poster: Dec 12 (Tuesday) 10:45 am – 12:45 pm CST

Poster Location: Great Hall & Hall B1+B2 (level 1) #210

Preference-grounded Token-level Guidance for Language Model Fine-tuning

Shentao Yang, Shujian Zhang, Congying Xia, Yihao Feng, Caiming Xiong, Mingyuan Zhou

We introduce a new training paradigm that iterates between grounding the sequence-level preference into token-level training guidance and improving LLM fine-tuning with the learned guidance.

Poster: Dec 12 (Tuesday) 5:15 pm – 7:15 pm CST

Poster Location: Great Hall & Hall B1+B2 (level 1) #326

FAMO: Fast Adaptive Multitask Optimization

Bo Liu, Yihao Feng, Qiang Liu, Peter Stone

We introduce Fast Adaptive Multitask Optimization (FAMO), a dynamic weighting method that decreases task losses in a balanced way, demonstrating competitive performance on both supervised and reinforcement learning multitask benchmarks.

Poster: Dec 14 (Thursday) 5:00 pm – 7:00 pm CST

Poster Location: Great Hall & Hall B1+B2 (level 1) #1221

LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning

Bo Liu, Yifeng Zhu, Chongkai Gao, Yihao Feng, Qiang Liu, Yuke Zhu, Peter Stone

We introduce a benchmark for knowledge transfer in lifelong robot learning.

Poster: Dec 12 (Tuesday) 10:45 am – 12:45 pm CST

Poster Location: Great Hall & Hall B1+B2 (level 1) #1414

Temporally Disentangled Representation Learning under Unknown Nonstationarity

Xiangchen Song, Weiran Yao, Yewen Fan, Xinshuai Dong, Guangyi Chen, Juan Carlos Niebles, Eric Xing, Kun Zhang

We introduce NCTRL, a principled estimation framework, to provably recover time-delayed latent causal variables and identify their relations from measured sequential data under unknown non-stationary conditions.

Poster: Dec 13 (Wednesday) 5:00 pm – 7:00 pm CST

Poster Location: Great Hall & Hall B1+B2 (level 1) #1007

Career Opportunities with Salesforce AI Research

Summer 2024 Intern – AI Research – Palo Alto

Summer 2024 Intern – Tableau Research – Seattle, Washington

Summer 2024 Intern – AI Research – Singapore

As a research intern, you will work with a team of research scientists and engineers on a project that ideally leads to a submission to a top-tier conference.

Applied Scientist – Salesforce AI Research (Palo Alto)

Research Scientist – Salesforce AI Research (Palo Alto)

Research Scientist/Senior Research Scientist – Salesforce AI Research Singapore (Singapore)

Machine Learning Engineer – AI Research (Palo Alto)

Ethical AI Engineer, Confidence Scores & Citations (Palo Alto)

Explore More

To learn more about these and other research projects, please visit our website at salesforceairesearch.com.
