Twelve Labs Scales Multimodal AI Video Models on AWS
Key Highlights:
Twelve Labs leverages AWS to scale foundation models for video search and insights.
Applications span media, sports, and entertainment, enabling semantic search and highlight creation.
Uses Amazon SageMaker HyperPod to train models across data formats such as video, speech, and text; a minimal provisioning sketch follows below.
Source: Business Wire
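The highlights note that Twelve Labs trains its foundation models on Amazon SageMaker HyperPod. As a rough illustration only, here is a minimal boto3 sketch of provisioning a HyperPod cluster; the cluster name, instance type and count, S3 lifecycle path, and IAM role ARN are hypothetical placeholders, not details from the announcement.

```python
import boto3

# Minimal sketch: create a SageMaker HyperPod cluster for distributed training.
# All names, counts, paths, and ARNs below are hypothetical placeholders.
sagemaker = boto3.client("sagemaker", region_name="us-west-2")

response = sagemaker.create_cluster(
    ClusterName="video-fm-training",  # hypothetical cluster name
    InstanceGroups=[
        {
            "InstanceGroupName": "gpu-workers",
            "InstanceType": "ml.p5.48xlarge",  # GPU nodes for model training
            "InstanceCount": 8,                # hypothetical scale
            "LifeCycleConfig": {
                # Bootstrap scripts that set up the training stack on each node.
                "SourceS3Uri": "s3://example-bucket/hyperpod-lifecycle/",
                "OnCreate": "on_create.sh",
            },
            "ExecutionRole": "arn:aws:iam::123456789012:role/HyperPodExecutionRole",
            "ThreadsPerCore": 1,
        }
    ],
)
print(response["ClusterArn"])
```

HyperPod's appeal for a workload like this is resilient, long-running distributed training: the lifecycle scripts configure each node, and the service can replace failed nodes mid-run, which matters when training across large volumes of video, speech, and text data.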
Notable Quotes:
“Nearly 80% of the world’s data is in video, yet most of it is unsearchable. We are now able to address this challenge, surfacing highly contextual videos to bring experiences to life.”
“AWS has helped Twelve Labs build the tools needed to better understand and rapidly produce more relevant content.”
Our Take:
Twelve Labs’ partnership with AWS marks a significant step in AI-driven video intelligence. By bringing multimodal AI to video content, industries such as sports, media, and entertainment can change how they catalog, analyze, and use video data. With AWS’s infrastructure and Twelve Labs’ models, unstructured video becomes searchable, actionable data; the sketch below shows what such a semantic search call might look like in practice. This collaboration could redefine how accessible and useful video data is at scale.
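To make the semantic search claim concrete, here is a hedged sketch of querying indexed footage through the Twelve Labs API with plain HTTP. The endpoint version, request fields, API key, and index ID are assumptions based on my reading of the public documentation, not details from the announcement; check the current API reference before relying on them.

```python
import requests

# Sketch of a semantic video search call against the Twelve Labs API.
# Endpoint version, field names, and IDs are assumptions, not confirmed
# details from the source article.
API_KEY = "tlk_..."   # hypothetical API key
INDEX_ID = "65f0..."  # hypothetical index of previously ingested footage

resp = requests.post(
    "https://api.twelvelabs.io/v1.2/search",
    headers={"x-api-key": API_KEY},
    json={
        "index_id": INDEX_ID,
        "query": "game-winning three-pointer in the final seconds",
        "search_options": ["visual", "conversation"],  # visual + speech cues
    },
    timeout=30,
)
resp.raise_for_status()

# Each hit is a clip with a video ID, start/end offsets, and a relevance
# score: the raw material for automated highlight creation.
for clip in resp.json().get("data", []):
    print(clip["video_id"], clip["start"], clip["end"], clip["score"])
```

The multimodal angle is visible in `search_options`: a single natural-language query can match against what is seen on screen and what is said in the audio, which is what makes previously unsearchable video addressable.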