In a demo, Mark Zuckerberg showed how the company creates rudimentary videos based on a text description. A counterpart to Dall-E, but for video.
Earlier this year, Meta, the parent company of Facebook, introduced Make-A-Scene, which, like Dall-E or Midjourney, turns a text description into an AI-generated image. Now it goes a step further with Make-A-Video, which does the same thing but produces a video.
For now, it is not an application you can try out yourself. On Facebook, Zuckerberg shared short clips of a teddy bear painting itself, a robot on a surfboard, and a spaceship landing. Graphically, the clips recall footage from the 1980s because the resolution is still quite low, but the fact that they were generated entirely by AI is remarkable.
AI-based image generation also raises sensitive issues. Because such systems are guided by existing images, there is a chance they will adopt certain stereotypes, for example by portraying a nurse as a woman and a doctor as a man. At the same time, there is a risk of abuse, since the technology makes it easy to manipulate images.
Alongside the announcement, Meta published a research paper with more details about its work. The paper notes that such AI systems rely on public datasets, and Meta says it is open to future feedback to refine the research project.