Introducing OLMoE, plus highlights from ACL 2024, a paper in Nature, and more.


September 2024

Ai2 newsletter


Top story

Our 100% open Mixture-of-Experts model is here!

This week, we welcomed the latest addition to the OLMo family: OLMoE, the first high-performing Mixture-of-Experts LLM that is 100% open. Efficient enough to run locally, OLMoE ships with its data, code, logs, experiments, and analysis all open and available to review.

Learn more ➞

Ai2 researchers and collaborators accept the Best Resource Paper for Dolma.

ACL 2024 paper recognition round-up

We're thrilled for our teams who were recognized at the ACL 2024 Conference! OLMo received the Best Theme Paper Award, Dolma and AppWorld each received a Best Resource Paper Award, and "Political Compass or Spinning Arrow?" was honored with an Outstanding Paper Award.

Read our papers ➞

New paper featured in Nature

AI reflects human biases — like covert racism. In work recently published in Nature, our team shows how, despite efforts to remove overt racial bias, LLMs generate covertly racist decisions about people based on their dialect.

Read the paper ➞

A summary of the benefits of Digital Socrates, including its focus on quality and localization of interpretable feedback.

New evaluation tool Digital Socrates

At ACL 2024, our team presented Digital Socrates, an interpretable explanation evaluation tool that can automatically characterize the explanation capabilities of modern LLMs.

Read the blog ➞
The payout leaderboard for DEF CON 32.

Ai2 at DEF CON 32

Our team attended this year's DEF CON to co-host the red teaming of OLMo at the AI Village. Christopher Fiorelli, one of our technical program managers, wrote a brief retrospective of the event from his point of view.

Read it here ➞

Work with us

Ai2 newsletter archive


Ai2, 2157 N Northlake Way, Suite 110, Seattle, WA 98103, USA
