The Download: Smart glasses, and tiny AI models


In case you missed the memo, we are barreling toward the next big consumer device category: smart glasses. At its developer conference last week, Meta introduced a positively mind-blowing new set of augmented reality (AR) glasses dubbed Orion. Snap also unveiled its new Snap Spectacles last week. Back in June at Google I/O, Google teased a pair of its own, and Apple is rumored to be working on a model as well.

After years of promise, AR specs are at last A Thing. But what’s really interesting about all this isn’t AR at all. It’s AI. Smart glasses let you seamlessly interact with AI as you go about your day. I think that’s going to be a lot more useful than viewing digital objects in physical spaces. Put more simply: it’s not about the visual effects, it’s about the brains. Read the full story.

—Mat Honan

This story is from the very first edition of The Debrief, MIT Technology Review’s new newsletter. It provides a weekly take on the tech news that really matters, and links to stories we love—as well as the occasional recommendation.

Sign up to receive it in your inbox every Friday, and to get ahead with the real story behind the biggest news in tech.

Why bigger is not always better in AI 

In AI research, everyone seems to think that bigger is better. The idea is that more data, more computing power, and more parameters will lead to models that are more powerful.

But with scale comes a slew of problems, such as invasive data-gathering practices and child sexual abuse material in data sets. To top it off, bigger models also have a far bigger carbon footprint, because they require more energy to run. 

It doesn’t have to be like this. Researchers at the Allen Institute for Artificial Intelligence have built a family of open-source models that achieve impressive performance with a fraction of the resources required by state-of-the-art models. Read more about why it’s a big achievement.