This week, we welcomed the latest addition to the OLMo family: OLMoE, the first Mixture-of-Experts (MoE) LLM that is both competitive in quality and 100% open. Efficient enough to run locally, OLMoE ships with its data, code, logs, experiments, and analysis all open and available to review.
We're thrilled for our teams who were recognized at the ACL 2024 conference! OLMo received the Best Theme Paper Award, Dolma and AppWorld each received a Best Resource Paper Award, and "Political Compass or Spinning Arrow?" was honored with an Outstanding Paper Award.
AI reflects human biases — like covert racism. In work recently published in Nature, our team shows how, despite efforts to remove overt racial bias, LLMs generate covertly racist decisions about people based on their dialect.
At ACL 2024, our team presented Digital Socrates, an explanation-evaluation tool that automatically and interpretably characterizes the explanation capabilities of modern LLMs.
Our team attended this year's DEF CON to co-host the red teaming of OLMo at the AI Village. Christopher Fiorelli, one of our technical program managers, wrote a brief retrospective of the event from his point of view.