
Memories AI Is Building the Visual Memory Layer for Wearables and Robotics

Why It Matters
Visual memory enables embodied AI to recall and act on past visual experiences, a critical capability for wearables and robots operating in the physical world. The Nvidia partnership accelerates the deployment of this technology across emerging edge markets.
Key Takeaways
- Memories.ai partners with Nvidia for visual memory tech
- LVMM launched July 2025, comparable to Gemini Embedding 2
- Raised $16M; investors include Susa Ventures, Seedcamp
- Qualcomm partnership to run LVMM on its processors
- Focus on model infrastructure, not hardware commercialization yet
Pulse Analysis
Visual memory has emerged as the missing piece for AI systems that operate in the physical world. While large language models have added text‑based recall, they cannot index the flood of video data generated by wearables or robots. Memories.ai tackles this gap by leveraging Nvidia’s Cosmos‑Reason 2 vision‑language model and Metropolis video‑search platform to embed, index, and retrieve visual streams in real time. This infrastructure transforms raw footage into searchable memory, enabling devices to reference past scenes much like humans do.
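The embed, index, and retrieve loop described above can be pictured in miniature. The sketch below is purely illustrative and assumes nothing about Memories.ai's actual system: the `embed` function is a toy stand-in for a learned encoder such as a vision-language model, and the cosine-similarity store is a hypothetical placeholder for a production video index.

```python
import numpy as np

def embed(frame: np.ndarray) -> np.ndarray:
    # Toy stand-in for a learned visual embedder: L2-normalize flattened pixels.
    v = frame.astype(float).flatten()
    return v / (np.linalg.norm(v) + 1e-9)

class VisualMemory:
    """Minimal embed -> index -> retrieve store over past 'scenes'."""
    def __init__(self):
        self._vecs, self._labels = [], []

    def add(self, frame: np.ndarray, label: str) -> None:
        # Index a frame by storing its embedding alongside a label.
        self._vecs.append(embed(frame))
        self._labels.append(label)

    def query(self, frame: np.ndarray, k: int = 1) -> list[str]:
        # Retrieve the k most similar past scenes by cosine similarity.
        sims = np.stack(self._vecs) @ embed(frame)
        return [self._labels[i] for i in np.argsort(sims)[::-1][:k]]

# Usage: index two synthetic scenes, then query with a noisy re-observation.
mem = VisualMemory()
mem.add(np.eye(4), "kitchen")
mem.add(np.ones((4, 4)) - np.eye(4), "hallway")
result = mem.query(np.eye(4) + 0.05)
print(result)  # → ['kitchen']
```

A real system would replace the toy embedder with a neural encoder and the brute-force similarity scan with an approximate nearest-neighbor index, but the data flow — raw footage in, searchable memory out — is the same.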
The company’s Large Visual Memory Model (LVMM), launched in July 2025, already rivals Google’s Gemini Embedding 2 in multimodal retrieval performance. By building a custom data‑collection device, LUCI, Memories.ai gathered high‑quality video for training without relying on off‑the‑shelf recorders. A recent partnership with Qualcomm will allow the LVMM to run efficiently on edge processors, a critical step for battery‑constrained wearables and autonomous robots. Although the firm does not plan to sell hardware, the LVMM’s portability positions it for integration with major wearable manufacturers that are still under nondisclosure.
With $16 million raised from investors such as Susa Ventures and Seedcamp, Memories.ai is well‑capitalized to scale its infrastructure while the broader market for AI‑enabled wearables and robotics expands. Competitors like OpenAI, xAI, and Google focus primarily on textual memory, leaving a clear niche for visual recall solutions. As edge AI chips become more powerful, demand for on‑device visual memory will accelerate, potentially unlocking new use cases in augmented reality, industrial inspection, and autonomous navigation. Memories.ai's early‑mover advantage and Nvidia partnership could make it a foundational layer for the next generation of embodied AI.