Introducing Multi Motion Brush: A New Era of AI Video Generation


Runway, an applied AI research company, has unveiled a new feature for its Gen-2 AI video generation model: the Multi Motion Brush. This tool changes the way creators approach video generation by allowing them to apply independent motion to multiple areas of a video.

The Evolution of Motion Brush

The Multi Motion Brush is an evolution of the original Motion Brush feature, which was introduced in November 2023. The first iteration of the Motion Brush enabled the addition of a single type of motion to videos. Now, with the Multi Motion Brush, creators can apply motion to up to five unique subjects or areas within a video, significantly enhancing the level of creative control available.

Gen-2: A Foundation for Creativity

Runway’s Gen-2 model represents a major leap forward from its predecessor, Gen-1. Initially, it allowed for the generation of clips up to four seconds long, which was later extended to 18 seconds. It introduced capabilities such as text, video, and image-based generation, laying the groundwork for features like the Multi Motion Brush. The Gen-2 model is part of Runway’s suite of over 30 AI Magic Tools designed to empower creative video production.

Independent Motion Control

The Multi Motion Brush and Camera Control work independently, offering users the flexibility to experiment with both tools to achieve the desired effects in their video generations. This level of control is unprecedented in the AI video market, where other products typically add motion to the entire video rather than to multiple independent areas.

Availability and Access

The Multi Motion Brush was initially previewed through Runway’s Creative Partners Program and is now available to all users of Gen-2. For those interested in exploring this feature, it is accessible on Runway’s website, where users can try out the tool and integrate it into their creative workflow.


Runway’s introduction of the Multi Motion Brush marks a significant milestone in the AI video market. By providing creators with the ability to animate multiple areas of a video independently, Runway is not only advancing the capabilities of AI in art and entertainment but also empowering human creativity with new tools that redefine the boundaries of video generation.

Chinonso Anyaehie

Chinonso Anyaehie is a leading voice exploring the societal impacts of artificial intelligence and emerging technologies. As founder of the popular technology blog DecentraPress, Chinonso Anyaehie demystifies complex innovations like blockchain, robotics, and AI, transforming dense research into accessible, engaging stories.

With over 7 years of experience as a science and tech writer, Chinonso Anyaehie provides thoughtful analysis on how bleeding-edge breakthroughs are reshaping our world. He delves into the progress and pitfalls of exponential technologies through an accessible, human-centric lens.

In addition to running DecentraPress, Chinonso Anyaehie is a frequent contributor to major tech publications and conferences. He is committed to fostering nuanced conversations on how we ethically steer emerging advances for the benefit of humanity.
