Equal contribution between Erik Nijkamp and Hiroaki Hayashi. Links: Paper, Code, Tweet. Abstract: The family of Salesforce CodeGen models is growing with CodeGen2.5 – a small but mighty model! While there has been a recent trend…
TLDR We trained a series of 7B LLMs named XGen-7B with standard dense attention on sequence lengths of up to 8K, for up to 1.5T tokens. We also fine-tuned the models on public-domain…
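The XGen-7B checkpoints are distributed through Hugging Face. As a minimal sketch (assuming the Salesforce/xgen-7b-8k-base checkpoint name and the standard transformers generation API, neither of which is spelled out in this excerpt), loading and sampling from the base model might look like this:

```python
# Minimal sketch: sampling from an XGen-7B base checkpoint via Hugging Face
# transformers. The model id "Salesforce/xgen-7b-8k-base" is an assumption,
# not taken from the excerpt above.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Salesforce/xgen-7b-8k-base"

# XGen uses a custom tokenizer, hence trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

prompt = "The benefits of long-context language models include"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the model was trained with dense attention on 8K-token sequences, the same call works unchanged for prompts that are several thousand tokens long, subject to available memory.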
Links: Research Paper, GitHub. Can you imagine a machine writing an app for you, just by telling it what you want? As futuristic as this scenario sounds, it’s actually here today. Salesforce AI…