B2B Growth News and Headlines


TwelveLabs Launches Its Most Powerful Video Understanding Model, Marengo 3.0 on TwelveLabs and Amazon Bedrock

MarTech Series • December 2, 2025

Why It Matters

By delivering multimodal video intelligence at cloud‑scale, Marengo 3.0 accelerates content discovery and automation for enterprises, reshaping how video data is monetized and managed. Its Bedrock integration lowers the barrier for developers to embed sophisticated video analytics into existing applications.

Key Takeaways

  • Marengo 3.0 processes video at petabyte scale
  • Supports simultaneous visual, audio, and text embeddings
  • Integrated with Amazon Bedrock for instant API access
  • Enterprise video search runs with sub‑second latency
  • Labeling costs cut up to 70%

Pulse Analysis

The explosion of video content across social, entertainment, and enterprise domains has outpaced traditional indexing methods, prompting a shift toward AI‑driven foundations. Large‑scale video models that can parse frames, transcribe speech, and extract semantic meaning are becoming essential for businesses seeking to unlock hidden insights. TwelveLabs’ Marengo 3.0 arrives at this inflection point, promising a unified representation of multimodal signals that rivals human perception while maintaining the throughput required for petabyte‑level libraries.

Marengo 3.0 distinguishes itself through a three‑branch architecture that jointly learns visual, auditory, and textual embeddings, enabling it to “read” subtitles, “hear” dialogues, and “watch” actions in a single pass. The model is hosted on TwelveLabs’ managed service and exposed via Amazon Bedrock, allowing developers to call the API without managing infrastructure. Benchmarks released by the company show sub‑second query latency on billion‑frame corpora and a 70% reduction in manual labeling effort, translating into faster time‑to‑value for video‑intensive workflows such as compliance monitoring, content recommendation, and automated metadata generation.
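The Bedrock access pattern described above can be sketched as follows. This is a minimal illustration using the standard `boto3` Bedrock runtime client; the model ID (`twelvelabs.marengo-3-0`) and the request/response schema are assumptions for illustration, not TwelveLabs' documented interface.

```python
import json

# Assumed model identifier -- check the Bedrock model catalog for the real one.
MODEL_ID = "twelvelabs.marengo-3-0"


def build_embedding_request(video_s3_uri, query_text=None):
    """Serialize a hypothetical multimodal embedding request body."""
    payload = {
        "inputType": "video",
        "mediaSource": {"s3Location": {"uri": video_s3_uri}},
    }
    if query_text:
        # Optional text for cross-modal search against the video embeddings.
        payload["textQuery"] = query_text
    return json.dumps(payload)


def invoke(body):
    """Send the request through the Bedrock runtime (needs AWS credentials)."""
    import boto3  # deferred import so the request builder works offline

    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=MODEL_ID,
        body=body,
        contentType="application/json",
        accept="application/json",
    )
    return json.loads(response["body"].read())


if __name__ == "__main__":
    body = build_embedding_request("s3://my-bucket/demo.mp4", "goal celebration")
    print(body)
```

The point of the sketch is the operational claim in the article: the developer supplies only a JSON body and a model ID, and Bedrock handles hosting, scaling, and authentication of the underlying model.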

For enterprises, the strategic impact is twofold: operational efficiency and new revenue streams. Companies can now automate video search, generate searchable transcripts, and detect brand‑safe content at scale, reducing reliance on costly human review teams. Meanwhile, the Bedrock integration opens the model to a broader ecosystem of AWS customers, fostering rapid adoption across sectors ranging from media streaming to e‑learning. As competitors race to build comparable video foundations, TwelveLabs’ early mover advantage and cloud partnership position Marengo 3.0 as a cornerstone technology for the next generation of video‑centric AI applications.


Read Original Article