By now, you’ve probably heard of text-to-image prompt-based AI such as DALL-E 2, Midjourney, and even Google’s own version. These programmes do pretty much what they sound like: they take a text prompt, or a set of instructions, and turn it into an image, usually either photo-realistic or illustrative. Tests have shown just how powerful this technology can be, and because it’s AI, it’s constantly evolving and improving. Well, now text-to-video AI has arrived, and it could be a complete game changer.
It’s not available yet, of course. For now, we have only a fleeting glimpse of what might be possible, via a Twitter post from Runway, which describes itself on its Twitter profile as browser-based “professional video editing powered by artificial intelligence”.
In the example video, we see a man playing tennis on a clay court. A set of written prompts then changes the parameters. First, the prompt reads “a tennis court on a sandy beach”, and the video background changes so that the man appears to be playing tennis on a beach.
Next, the prompt says “a tennis court on the moon” and the background changes accordingly. The prompts go through a number of iterations, while the man in the foreground remains unaffected throughout. Commenters have expressed their disbelief at how good this seems to be, with one joking that “the moon would have less gravity so the ball wouldn’t bounce that way!”

Realism aside, however, this is a pretty big deal, particularly in the world of special effects and CGI. If you can change aspects of a video merely by typing in a set of prompts, well, the mind boggles – not least in terms of deep fakes and other potentially nefarious uses. But it looks as though this technology is firmly here to stay, and it will continue to develop at an alarming rate. It will be interesting to see where this goes in the next year or so – or maybe even just the next few months!