AI

Lightricks Open-Sources AI Video Model LTX-2, Challenges Sora and Veo

THE DECODER • January 11, 2026

Companies Mentioned

  • Lightricks
  • NVIDIA (NVDA)
  • Google (GOOG)
  • OpenAI
  • Character AI
  • Alibaba Group (BABA)
  • GitHub
  • Hugging Face

Why It Matters

The open‑source release democratizes high‑fidelity audiovisual generation, forcing proprietary players to accelerate innovation and giving creators direct control over AI‑augmented content.

Key Takeaways

  • LTX‑2 produces 20‑second 4K video with stereo audio.
  • An asymmetric 14B video / 5B audio transformer captures bidirectional cues.
  • Inference runs 18× faster than Wan2.2‑14B on an H100.
  • The open‑source release includes LoRA adapters and multi‑GPU tools.
  • Human studies show quality comparable to Veo 3 and Sora 2.

Pulse Analysis

The rapid rise of generative AI has pushed video synthesis from experimental labs into mainstream workflows, yet most solutions still treat audio and visuals as separate stages. Lightricks’ LTX‑2 confronts this limitation with an asymmetric dual‑stream transformer that processes video and audio in parallel, using modality‑specific variational autoencoders and cross‑attention to bind sound to visual events. By tapping the full depth of a multilingual encoder and introducing "thinking tokens," the model captures nuanced lip‑sync and environmental acoustics that earlier pipelines missed.
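The cross‑attention coupling described above, where each stream queries the other so that sound stays bound to visual events, can be illustrated with a toy NumPy sketch. All names, shapes, and dimensions here are illustrative assumptions for exposition, not LTX‑2's actual code; the real model uses far larger transformer streams over VAE latents.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over attention scores.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d_model):
    # Each query token attends over the *other* modality's tokens,
    # producing a weighted mix of that modality's features.
    scores = queries @ keys_values.T / np.sqrt(d_model)
    return softmax(scores, axis=-1) @ keys_values

rng = np.random.default_rng(0)
d_model = 64
video_tokens = rng.standard_normal((16, d_model))  # larger video stream
audio_tokens = rng.standard_normal((4, d_model))   # smaller audio stream

# One asymmetric dual-stream block: video attends to audio and vice versa,
# with residual connections, so each stream is conditioned on the other.
video_tokens = video_tokens + cross_attention(video_tokens, audio_tokens, d_model)
audio_tokens = audio_tokens + cross_attention(audio_tokens, video_tokens, d_model)
```

The asymmetry in the real model (14B video vs. 5B audio parameters) reflects that video carries far more information per clip; the cross‑attention links are what let a door slam in the audio stream line up with the matching frame in the video stream.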

Performance is a decisive differentiator in the crowded AI video market. Running on Nvidia’s H100, LTX‑2 completes a single generation step for a 121‑frame, 720p clip in just over a second, outpacing Alibaba’s Wan2.2‑14B by a factor of eighteen. The model also extends video length to 20 seconds, surpassing Google’s Veo 3 and OpenAI’s Sora 2, while human preference tests place it on par with those proprietary systems. These speed and quality gains lower the barrier for real‑time content creation, making AI‑driven video viable for advertising, e‑learning, and interactive media.

Beyond technical merits, Lightricks’ decision to open‑source LTX‑2 reshapes the industry’s business dynamics. By providing distilled weights, LoRA adapters, and a modular training stack compatible with consumer‑grade RTX GPUs, the company empowers developers to run the model locally, sidestepping costly API fees and data‑privacy concerns. This ethical stance—keeping AI augmentation under the creator’s control—could accelerate community‑driven improvements and set new standards for transparency in generative media. As more firms adopt open‑source audiovisual models, the competitive pressure on closed‑API providers is likely to intensify, spurring faster innovation across the sector.
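The LoRA adapters shipped with the release follow the standard low‑rank adaptation scheme: a frozen base weight is augmented with a small trainable low‑rank update that can later be merged back in for zero inference overhead. The NumPy sketch below shows that merge identity with tiny illustrative dimensions; real layers are orders of magnitude larger, and the specific rank and alpha values here are assumptions, not Lightricks' settings.

```python
import numpy as np

rng = np.random.default_rng(42)
d_out, d_in, r = 8, 8, 2  # tiny illustrative sizes; real layers are far larger

W = rng.standard_normal((d_out, d_in))     # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01  # LoRA down-projection (trainable)
B = rng.standard_normal((d_out, r)) * 0.01 # LoRA up-projection (trainable)
alpha = 16.0                                # LoRA scaling hyperparameter

# Merged weight: W' = W + (alpha / r) * B @ A
W_merged = W + (alpha / r) * (B @ A)

x = rng.standard_normal(d_in)
# The merged layer's output equals base output plus the scaled adapter path.
assert np.allclose(W_merged @ x, W @ x + (alpha / r) * (B @ (A @ x)))
```

Because only A and B are trained, an adapter adds just `r * (d_in + d_out)` parameters per layer, which is why fine‑tuning a distilled LTX‑2 checkpoint is feasible on consumer‑grade RTX GPUs rather than datacenter hardware.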

Lightricks open-sources AI video model LTX-2, challenges Sora and Veo

Read Original Article