Morpheus exposes the potential allocative harms of popular pretrained NLP models by simulating inflectional variation. We propose adversarial fine-tuning to mitigate the effects of training only on error-free Standard English data.
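To make the idea concrete, here is a minimal, hypothetical sketch of the greedy inflectional search at the heart of such an attack. The toy inflection table and the dummy `score_fn` are stand-ins for a real morphological resource and the victim model's loss; they are not the paper's code.

```python
# Minimal sketch of a greedy inflectional perturbation in the spirit of Morpheus.
# INFLECTIONS and score_fn are illustrative assumptions, not the paper's code.

from typing import Callable, Dict, List

# Toy inflection table; a real attack would derive candidates from a
# morphological resource rather than hard-coding them.
INFLECTIONS: Dict[str, List[str]] = {
    "walk": ["walk", "walks", "walked", "walking"],
    "dog": ["dog", "dogs"],
}

def perturb(tokens: List[str], score_fn: Callable[[List[str]], float]) -> List[str]:
    """Greedily swap each token's inflection for the variant that
    maximizes the victim model's loss (score_fn)."""
    adv = list(tokens)
    for i, tok in enumerate(tokens):
        candidates = INFLECTIONS.get(tok.lower(), [tok])
        # Keep the inflection that hurts the model most at this position.
        adv[i] = max(candidates, key=lambda c: score_fn(adv[:i] + [c] + adv[i + 1:]))
    return adv

# Example with a dummy loss that "dislikes" plural/past forms:
loss = lambda toks: sum(t.endswith(("s", "ed")) for t in toks)
print(perturb(["the", "dog", "walk"], loss))  # ['the', 'dogs', 'walks']
```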
The 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020) kicked off this week and runs from Sunday, July 5 to Friday, July 10 in a fully virtual format. ACL is the premier conference of…
Word embeddings inherit the strong gender bias present in their training data, and downstream models can further amplify it. We propose to purify word embeddings of corpus regularities such as word frequency before inferring and removing the gender subspace, which significantly improves debiasing performance.
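As a rough illustration of the two-step recipe (not the paper's implementation), the sketch below first strips a dominant principal component, which prior work links to word frequency, and then estimates and projects out a gender direction from definitional word pairs. The single-component choices and variable names are assumptions for brevity.

```python
# Hedged sketch: remove a frequency-related principal component, then
# remove the gender direction (hard debias with one direction).
# Illustrative only; not the authors' implementation.

from typing import List, Tuple
import numpy as np

def remove_component(vectors: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Project a unit direction out of every row vector."""
    direction = direction / np.linalg.norm(direction)
    return vectors - np.outer(vectors @ direction, direction)

def double_debias(E: np.ndarray, pairs: List[Tuple[int, int]]) -> np.ndarray:
    """E: (vocab, dim) embedding matrix; pairs: row indices of
    definitional word pairs such as (he, she), (man, woman)."""
    # Step 1: strip the dominant principal component of the centered
    # embeddings, assumed here to carry frequency information.
    centered = E - E.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    E = remove_component(E, Vt[0])

    # Step 2: estimate the gender subspace from definitional-pair
    # differences and project it out.
    diffs = np.stack([E[a] - E[b] for a, b in pairs])
    _, _, Vt = np.linalg.svd(diffs - diffs.mean(axis=0), full_matrices=False)
    return remove_component(E, Vt[0])
```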
In our study, we show how a language model, trained simply to predict a masked (hidden) amino acid in a protein sequence, recovers high-level structural and functional properties of proteins.
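For readers who want to try masked amino-acid prediction themselves, here is a minimal sketch using a publicly available protein language model. The `Rostlab/prot_bert` checkpoint is an assumption for illustration and is not necessarily the model from the study.

```python
# Minimal sketch of masked amino-acid prediction with a public protein
# language model. The checkpoint is an illustrative assumption.

from transformers import pipeline

# ProtBert expects residues separated by spaces.
unmasker = pipeline("fill-mask", model="Rostlab/prot_bert")
sequence = "M K T A Y I A K [MASK] R Q I S F V K"

# Print the top-3 predicted amino acids for the masked position.
for pred in unmasker(sequence, top_k=3):
    print(pred["token_str"], round(pred["score"], 3))
```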
We show that deep neural models can describe commonsense physics in a way that is valid, sufficient, and generalizable. Our ESPRIT framework is trained on a new dataset of physics simulations paired with descriptions, which we collected and have open-sourced.
We investigate NVIDIA’s Triton (TensorRT) Inference Server as a way of hosting Transformer language models. The blog is roughly divided into two parts: (i) instructions for setting up your own inference server,…
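As a taste of what querying such a server looks like, here is a hedged Python sketch using the official `tritonclient` package. The model name and tensor names (`lm`, `input_ids`, `logits`) are assumptions; they must match the `config.pbtxt` in your own model repository.

```python
# Hedged sketch of querying a running Triton server over HTTP.
# Model and tensor names are assumptions matching a hypothetical config.

import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Dummy token IDs shaped like a single 16-token sequence.
token_ids = np.zeros((1, 16), dtype=np.int64)

infer_input = httpclient.InferInput("input_ids", list(token_ids.shape), "INT64")
infer_input.set_data_from_numpy(token_ids)

result = client.infer(
    model_name="lm",
    inputs=[infer_input],
    outputs=[httpclient.InferRequestedOutput("logits")],
)
print(result.as_numpy("logits").shape)
```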
In our study, we demonstrate that an artificial intelligence (AI) model can learn the language of biology in order to generate proteins in a controllable fashion.