Meta Platforms said on Tuesday that it would give researchers access to components of a new “human-like” artificial intelligence model that can analyse and complete unfinished images more accurately than existing models.
The company said the I-JEPA model fills in missing parts of images not by extrapolating from nearby pixels, but by drawing on background knowledge about the world.
Meta’s chief AI scientist, Yann LeCun, has advocated this approach because it incorporates human-like reasoning, which helps the technology avoid errors common in AI-generated images, such as hands with extra fingers, the company said.
Meta, which owns Facebook and Instagram, frequently publishes open-sourced research from its in-house AI lab. Chief executive Mark Zuckerberg has said that sharing models built by Meta’s researchers helps the company by spurring innovation, spotting safety gaps, and lowering costs.
In April, he told investors, “For us, it’s a lot better if the industry standardises on the basic tools that we use, so that we can benefit from the improvements that others make.”
The company’s executives have brushed aside warnings from others in the industry about how dangerous the technology could be. Last month, for example, they declined to sign a statement, backed by top executives from OpenAI, DeepMind, Microsoft, and Google, that equated the technology’s risks with those of pandemics and wars.
LeCun, regarded as one of the “godfathers of AI,” has spoken out against “AI doomsday” rhetoric and argued that safety checks should be built into AI systems instead.
Meta is also beginning to add generative AI features to its consumer products, such as ad tools that can create image backgrounds and an Instagram product that can modify user photos based on text prompts.