This year marks the 34th annual conference on Neural Information Processing
Systems (NeurIPS [https://neurips.cc/]), reimagined for the first time in a
fully virtual format. NeurIPS is a leading conference in the area of machine
learning and neural information processing systems in their biological,
technological, mathematical, and theoretical aspects. Neural information
processing is a field which benefits from a combined view of biological,
physical, mathematical, and computational sciences.
Salesforce Research is proud to announce a total of 5 accepted papers (1 oral, 1
spotlight, and 3 posters) and 8 accepted workshop papers from our team of
leading researchers.
Salesforce Research offers researchers and PhD students the opportunity to work
with large-scale industrial data, usually not accessible in an academic setting,
that leads to state-of-the-art research; the freedom to continue their own
research focus and connect it with real CRM applications; the flexibility to
choose research topics that fit their interests; and the opportunity to attend
conferences with our researchers to showcase accepted papers.
Three of the contributing authors are past or current Salesforce Research
interns, shedding light on the kind of impactful projects an internship at
Salesforce Research provides PhD candidates. Those interns are Chien-Sheng Wu,
Junwen Bai, and Minshuo Chen. Hear what our interns have to say about our
program and learn more here [https://einstein.ai/careers].
The accepted papers below will be presented by members of our team through
prerecorded talks and slides during the main conference on December 6th-12th,
2020. Additionally, our team will be hosting 11 roundtable events throughout the
conference to give attendees a chance to meet our recruiters and researchers in
a smaller group setting, as well as a fun AI Trivia Challenge event on December
8th, 2020.
We’re also excited to announce our continued commitment to diversity through new
partnerships with the LatinX in AI Workshop
[https://www.latinxinai.org/neurips-2020] and the Black in AI Workshop
[https://blackinai2020.vercel.app/cpfbai2020] on December 7th, 2020. We’ll be
partnering with these organizations through the 2020/2021 conference cycle in
the LatinX in AI Mentorship Program and the Black in AI Graduate Mentorship
Program.
Questions and comments about this work can be discussed using the online NeurIPS
forum, on Twitter [https://twitter.com/SFResearch], or by emailing us at
salesforceresearch@salesforce.com. We look forward to sharing some of our
exciting new research with you next week!
Our Publications at NeurIPS 2020
ACCEPTED PAPERS
Theory-Inspired Path-Regularized Differential Network Architecture Search
[https://arxiv.org/abs/2006.16537]
Pan Zhou, Caiming Xiong, Richard Socher and Steven Hoi
Oral paper, NeurIPS 2020
A Simple Language Model for Task-Oriented Dialogue
[https://arxiv.org/abs/2005.00796]
Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz and Richard Socher
Spotlight paper, NeurIPS 2020
Towards Theoretically Understanding Why SGD Generalizes Better Than Adam in Deep
Learning [https://arxiv.org/abs/2010.05627]
Pan Zhou, Jiashi Feng, Chao Ma, Caiming Xiong, Steven Hoi and Weinan E
Poster paper, NeurIPS 2020
Online Structured Meta-learning [https://arxiv.org/abs/2010.11545]
Huaxiu Yao, Yingbo Zhou, Mehrdad Mahdavi, Zhenhui (Jessie) Li, Richard Socher
and Caiming Xiong
Poster paper, NeurIPS 2020
Towards Understanding Hierarchical Learning: Benefits of Neural Representations
[https://arxiv.org/abs/2006.13436]
Minshuo Chen, Yu Bai, Jason Lee, Tuo Zhao, Huan Wang, Caiming Xiong and Richard
Socher
Poster paper, NeurIPS 2020
ACCEPTED WORKSHOP PAPERS
How Important is the Train-Validation Split in Meta-Learning?
[http://arxiv.org/abs/2010.05843]
Yu Bai, Minshuo Chen, Pan Zhou, Tuo Zhao, Jason D. Lee, Sham Kakade, Huan Wang,
Caiming Xiong
NeurIPS 2020 workshop on Meta-Learning.
Task Similarity Aware Meta Learning: Theory-inspired Improvement on MAML
Pan Zhou, Yingtian Zou, Xiaotong Yuan, Jiashi Feng, Caiming Xiong, Steven C.H.
Hoi
NeurIPS 2020 workshop on Meta-Learning.
Representation Learning for Sequence Data with Deep Autoencoding Predictive
Components [https://arxiv.org/abs/2010.03135]
Junwen Bai, Weiran Wang, Yingbo Zhou, Caiming Xiong
NeurIPS 2020 workshop on Self-Supervised Learning for Speech and Audio
Processing
DIME: An Information-Theoretic Difficulty Measure for AI Datasets
[https://openreview.net/forum?id=kvqPFy0hbF&noteId=5H2fhlSQwz1]
Peiliang Zhang, Huan Wang, Nikhil Naik, Caiming Xiong, Richard Socher
NeurIPS 2020 Workshop: Deep Learning through Information Geometry
Prototypical Contrastive Learning of Unsupervised Representations
[https://arxiv.org/abs/2005.04966]
Junnan Li, Pan Zhou, Caiming Xiong, Richard Socher, Steven C.H. Hoi
NeurIPS 2020 workshop: Self-Supervised Learning — Theory and Practice
Catastrophic Fisher Explosion: Early Phase Fisher Matrix Impacts Generalization
Stanislaw Jastrzebski, Devansh Arpit, Oliver Åstrand, Giancarlo Kerg, Huan Wang,
Caiming Xiong, Richard Socher, Kyunghyun Cho, Krzysztof Geras
NeurIPS 2020 workshop on Optimization for Machine Learning.
Copyspace: Where to Write on Images?
Jessica Lundin, Michael Sollami, Alan Ross, Brian Lonsdorf, David Woodward, Owen
Schoppe, Sönke Rohde
NeurIPS 2020 workshop on Machine Learning for Creativity and Design
SOrT-ing VQA Models: Contrastive Gradient Learning for Improved Consistency
[https://arxiv.org/abs/2010.10038]
Sameer Dharur, Purva Tendulkar, Dhruv Batra, Devi Parikh, Ramprasaath R.
Selvaraju
NeurIPS 2020 workshop on Interpretable Inductive Biases and Physically
Structured Learning
HOSTED WORKSHOP
ML for Economic Policy Workshop [http://www.mlforeconomicpolicy.com/]
Friday, December 11, 2020
Start Time: 12:00pm EST
If you are interested in learning more about our research program, other
published works, and full-time and internship opportunities, please head to our
website [https://einstein.ai/].