- Meta released Llama 2 as a mostly open-source AI model in July.
- Llama 2 has become widely popular among AI developers.
- Replit CEO Amjad Masad said Mark Zuckerberg took a big risk making the model open-source.
Since Meta released Llama 2 as a (mostly) open-source project in July, the AI model has become a huge hit. So much so that some experts worry this powerful tool could be misused by bad actors.
Open-source software is free for anyone to use, inspect, modify and redistribute back to the community. The approach took hold in the 1990s and is now the backbone of many tech services.
While Llama 2 isn’t fully open-source, it gives developers a powerful model they can run and adapt with far more flexibility than the closed models built by OpenAI, Google and other major players in the generative AI field.
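As a rough illustration of that flexibility, the sketch below loads the Llama 2 weights locally rather than calling a hosted API. The Hugging Face transformers library, the gated meta-llama/Llama-2-7b-chat-hf checkpoint, and the generation settings are assumptions chosen for the example, not details from the article.

```python
# Minimal sketch, assuming the Hugging Face "transformers" library and the
# gated "meta-llama/Llama-2-7b-chat-hf" checkpoint (downloading it requires
# accepting Meta's license terms on huggingface.co first).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # hypothetical choice of model size

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Because the weights sit on the developer's own hardware, they can be
# fine-tuned, quantized, or otherwise modified; closed, API-only models
# don't allow that.
inputs = tokenizer(
    "Explain open-source software in one sentence.",
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```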
The AI community has embraced the opportunity, giving Meta CEO Mark Zuckerberg his next potentially huge platform. The model had been downloaded 30 million times, the company said in late September.
This wouldn’t have happened if Zuckerberg hadn’t been willing to take a big risk on Llama 2 possibly being used for nefarious purposes, according to a former top Facebook engineer. One example: Llama 2 can give a detailed walkthrough of how to turn anthrax into a biological weapon.
“It takes a certain amount of guts to release an open-source language model, especially with political heat that Meta’s getting from that,” said Amjad Masad during a recent episode of the No Priors podcast. “And Zuck has balls, right?”
Masad is the CEO and founder of the developer platform Replit. Before that, he spent almost three years at Facebook, where he helped create React Native and other popular software development tools.
During the No Priors podcast, Masad said he’s been surprised that Meta is the only major tech company so far to go the open-source route for AI models.
He compared this to Facebook’s Open Compute Project, which designed data center hardware and made those designs available for anyone to use and contribute to.
“It was a huge success. And what I told Zuck at the time was like, ‘Hey, why didn’t you do that for LLMs?'” Masad said, referring to large language models. “He just kind of nodded his head.”
Now that Llama 2 is out in the world, Masad highlighted a key difference between the Open Compute project and the Llama 2 approach.
“The AI safety angle makes it a little toxic for a lot of companies to touch. They wouldn’t want to be associated with something that spews something that is toxic,” he said, according to a transcript of the podcast.
Masad said it would be a blow to the AI community if Meta suddenly stopped supporting Llama models as a mostly open-source project, and he worries that other companies would be too timid to take Meta’s place.
“Will there be another player that will emerge?” Masad added. “The problem a lot is that, as you know, there isn’t a lot of guts in the industry.”

A Meta spokesperson didn’t respond to a request for comment on Thursday.
The latest Falcon open-source AI model is one possible alternative, but it came out of the United Arab Emirates rather than from a tech company.