What to Know About the Open Versus Closed Software Debate


Few debates have raged longer or more contentiously in the computing industry than this one: Is “open source” better than “closed” when it comes to software development?

That debate has been revived as companies like Google, Meta, OpenAI and Microsoft have diverged on how to compete for supremacy in artificial intelligence systems. Some are choosing a closed model while others espouse an open approach.

Here’s what to know.

Source code makes up the underlying building blocks of the apps you use. Developers can write tens of thousands of lines of source code to create programs that will run on a computer.

Open-source software is any such computer code that can be freely distributed, copied or altered to a developer’s own ends. The nonprofit Open Source Initiative, an industry organization, sets other stipulations and standards for what software is considered open source, but it is largely a matter of the code’s being free and open for anyone to use and improve.
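To make the definition concrete, here is a hypothetical sketch of what “freely distributed, copied or altered” looks like in practice. The function names and the paraphrased license note are illustrative, not drawn from any real project: under a permissive open-source license, a developer anywhere may copy the original code and modify it for their own ends.

```python
# Illustrative only: a tiny "program" released under a hypothetical
# permissive open-source license. Under such terms, anyone may use,
# copy, modify and redistribute this code without asking the author.

def word_count(text):
    """Count the words in a string -- the original author's version."""
    return len(text.split())

# A second developer is free to copy the file and alter it to their
# own ends -- here, counting distinct words while ignoring case:
def distinct_word_count(text):
    """A modified copy: count distinct words, case-insensitively."""
    return len({word.lower() for word in text.split()})

print(word_count("free and open for anyone to use"))        # 7
print(distinct_word_count("Open open SOURCE source"))       # 2
```

The point of the sketch is the second function: in an open-source model, that kind of derivative work is explicitly permitted, whereas in a closed model the source code is never available to copy in the first place.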

Some of the most famous software systems are open source, such as Linux, the operating system on which Google’s Android mobile system was built. Another well-known open-source product is Firefox, the free-to-download web browser created by the Mozilla Foundation.

Tech companies like Google, OpenAI and Anthropic have spent billions of dollars creating “closed,” or proprietary, A.I. systems. People who are not employed by those companies cannot see or tinker with the underlying source code, nor can the customers who pay to use the systems.

For a long time, this was not the norm. Most of these companies open-sourced their A.I. research so that other technologists could study and improve upon the work. But when tech executives began to realize that the pursuit of more advanced A.I. systems could be worth billions, they began walling off their research.

Tech companies maintain that this is for the good of humanity because these systems are powerful enough to potentially cause catastrophic societal damage if put into the wrong hands. Critics say the companies simply want to keep the tech from hobbyists and competitors.

Meta has taken a different approach. Mark Zuckerberg, Meta’s chief executive, decided to open-source his company’s large language model, a program that learns skills by analyzing vast amounts of digital text culled from the internet. Mr. Zuckerberg’s decision to open-source Meta’s model, LLaMA, lets any developer download it and use it to build chatbots and other services.

In a recent podcast interview, Mr. Zuckerberg said no single organization should have “some really superintelligent capability that isn’t broadly shared.”

It depends on whom you ask.

For many technologists, and for adherents of hard-core hacker culture, open source is the way to go. World-changing software tools should be freely distributed, they say, so that anyone can use them to build interesting and exciting technology.

Others believe that A.I. has advanced so rapidly that it should be closely held by the makers of these systems for safekeeping against misuse. Developing these systems also costs enormous amounts of time and money, and closed models should be paid for, they say.

The debate has already spread beyond Silicon Valley and computing enthusiasts. Lawmakers in the European Union and in Washington have held meetings and taken steps toward frameworks for regulating A.I., including the risks and rewards of open-source A.I. models.