Latest Lightricks model release enables fully directed, 60-second AI video
Lightricks updates LTXV to generate 60-second AI videos with real-time control. Open-source model available free on Hugging Face.

Open-source video generation model developer Lightricks is celebrating another industry first, with its latest update dramatically extending the duration of AI-generated clips, from just 8 seconds to as long as 60 seconds.
The new capability, available for all versions of its flagship LTX Video (LTXV) model, also integrates an improved autoregressive video engine that enables AI-generated videos to be streamed in real time, with the creator using multiple prompts to customize and control the output.
It's a significant breakthrough for generative AI video creation, giving creators the ability to generate much longer video sequences while refining them with enhanced control. Once the first second of video is generated, LTXV's algorithm creates the rest of the content in real time, with the user retaining full control over how the scene develops.
Dynamic, iterative model rollouts
The LTXV models were already among the most powerful AI video generation tools around, even before today's update. Available on open-source platforms in addition to being natively integrated with Lightricks' LTX Studio, their codebase and weights are freely available to the AI research community for experimentation, in stark contrast to rival proprietary models such as OpenAI's Sora and those from Runway AI.
The most advanced of Lightricks' models is LTXV-13B, which launched in May and introduced a novel multiscale rendering capability that lets creators tinker with their videos, adding more detail and color step by step, similar to the "layered" technique cartoonists and artists in traditional filmmaking use to progressively enhance different elements of a scene.
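To make the coarse-to-fine idea concrete, here is a purely illustrative sketch of multiscale rendering: draft a clip at low resolution, then repeatedly upsample and refine it. Every function here is a stand-in written for this article and does not reflect LTXV's actual implementation or API.

```python
import numpy as np

# Purely illustrative stand-ins for a video model's coarse pass and refinement passes.
def generate_draft(prompt: str, res: int, frames: int = 8) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(prompt)) % (2 ** 32))
    return rng.random((frames, res, res, 3))                     # coarse layout-and-motion pass

def upsample(video: np.ndarray, res: int) -> np.ndarray:
    factor = res // video.shape[1]
    return video.repeat(factor, axis=1).repeat(factor, axis=2)   # nearest-neighbour enlarge

def refine(video: np.ndarray, prompt: str, strength: float = 0.1) -> np.ndarray:
    rng = np.random.default_rng(len(prompt))
    detail = rng.standard_normal(video.shape)                    # stand-in for added detail and color
    return np.clip(video + strength * detail, 0.0, 1.0)

def render_multiscale(prompt: str, scales=(64, 128, 256)) -> np.ndarray:
    video = generate_draft(prompt, scales[0])
    for res in scales[1:]:
        video = refine(upsample(video, res), prompt)             # progressively enhance each pass
    return video

print(render_multiscale("a lighthouse at dusk").shape)           # (8, 256, 256, 3)
```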
The 13B model, which uses 13 billion parameters without compromising processing speed, was trained on content licensed through partnerships with Getty Images and Shutterstock, improving visual quality while keeping the training data ethically sourced.
Shortly after LTXV-13B was launched, Lightricks followed up with a "distilled" version tuned for even greater speed and efficiency, enabling high-quality video creation in just four to eight steps. That version also added enhanced support for Low-Rank Adaptation (LoRA), a technique for adapting large models to specific tasks without retraining the entire model, giving users the ability to quickly fine-tune it to their needs.
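For readers unfamiliar with the technique, the snippet below sketches the general LoRA idea in PyTorch: the original weights are frozen and only a small low-rank correction is trained. It illustrates the concept only and is not drawn from Lightricks' code.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a small trainable low-rank update (the generic LoRA idea)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                      # original weights stay frozen
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen projection plus the scaled low-rank correction x @ A^T @ B^T
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(nn.Linear(512, 512))
print(layer(torch.randn(2, 512)).shape)   # torch.Size([2, 512])
```

Because only the two small matrices are trainable, a fine-tune of this kind touches a tiny fraction of the model's parameters, which is why it can be done quickly and cheaply.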
Lightricks subsequently added native LoRA controls to enable better control of human motion, structure and object boundaries, allowing users to influence the finer details of their videos in ways that other generative AI video tools cannot.
Real-time video direction
With today's update, Lightricks is giving creators, researchers and software developers the opportunity to enrich their AI-generated videos in real time, whether they're using the 13B version or the original two-billion-parameter version of LTXV, which remains the most suitable for mobile platforms.
By utilizing real-time autoregressive sequence conditioning, LTXV generates videos in separate chunks of frames, with each previous chunk informing the creation of the frames that follow. According to Lightricks, this enables smoother continuity, similar to how each new paragraph a novelist writes is informed by the last.
The autoregressive technique is designed for AI-generated video that's created and streamed over the web in real time, and it supports continuous inputs from the user, who can directly manipulate the dynamically evolving content. It also supports real-time application of the newly added LoRA controls, according to an official announcement from Lightricks, so users can influence the depth of the video and character poses as it's being streamed.
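As a rough mental model of that chunk-by-chunk conditioning, the sketch below uses placeholder functions (not the LTXV API) to show how each chunk of frames is generated from a prompt plus the tail of the previous chunk, so the creator can steer the stream as it unfolds.

```python
import numpy as np

# Placeholder generator, not the LTXV API: produces one chunk of frames conditioned
# on the tail frames of the previous chunk so the sequence stays continuous.
def generate_chunk(prompt: str, context: np.ndarray | None, frames: int = 24) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(prompt)) % (2 ** 32))
    chunk = rng.random((frames, 64, 64, 3))
    if context is not None:
        chunk[0] = context[-1]           # continuity: start where the previous chunk left off
    return chunk

def stream_video(prompts, overlap: int = 8):
    """Yield chunks one at a time; the creator can change the prompt between chunks."""
    context = None
    for prompt in prompts:
        chunk = generate_chunk(prompt, context)
        context = chunk[-overlap:]       # carry the last frames forward as conditioning
        yield chunk

for i, chunk in enumerate(stream_video(["a city street at dawn", "rain begins to fall"])):
    print(f"chunk {i}: {chunk.shape}")
```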
Lightricks co-founder and CTO Yaron Inger said the update means that, for the first time, "AI video isn't just prompted, but truly directed," enabling real storytelling instead of just visual tricks.
Lightricks says this new capability, combined with longer-form videos, opens the door to all kinds of new generative storytelling applications, such as adaptive educational and advertising content, augmented reality visuals that can be synchronized with live performances, and more dynamic player-generated cutscenes based on gameplay data. It also supports real-time motion capture, enabling new possibilities for interactive platforms.
Zeev Farbman, co-founder and CEO of Lightricks, said the ability to generate 60-second videos while simultaneously refining their outputs will unlock a new era for generative media. "LTXV is unique in its ability to create longer scenes while maintaining full control of the extended sequences, which enables coherent storytelling with visual and semantic consistency, transforming AI video from a demo or just a random clip, into a true medium with creative intent," he said.
Lowered barriers for hardware and licensing
The LTXV models are also among the most cost-effective video generation algorithms around, with the beefy 13-billion-parameter version able to run efficiently on a single H100 graphics processing unit, or even a consumer-grade laptop. That's in sharp contrast to resource-hungry alternatives such as Sora, which typically require clusters of GPUs obtained through the cloud.
The latest version of LTXV is available to download immediately from Hugging Face and GitHub, complete with its open weights, and it can be used freely, without any licensing requirements, by startup developers, academics and other generative AI video enthusiasts.
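For those who want to try it, a minimal sketch of running the model through Hugging Face's diffusers library is below, assuming the LTXPipeline class and the Lightricks/LTX-Video checkpoint referenced on the model card at the time of writing; exact class names, checkpoints and default parameters may vary by release, so check the model card for current instructions.

```python
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

# Load the base LTX-Video checkpoint from Hugging Face and move it to the GPU.
pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Generate a short clip from a text prompt and save it as an MP4.
video = pipe(
    prompt="A woman walks along a foggy pier at sunrise",
    width=704,
    height=480,
    num_frames=161,
    num_inference_steps=50,
).frames[0]
export_to_video(video, "output.mp4", fps=24)
```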