
Making multimodal AI truly human-like is hard. We are a highly technical team with deep research and systems experience building multimodal assistants. Amit has a PhD in AI, worked as an AI research scientist at Meta, and trained multimodal assistants for smart glasses. He also worked with the Meta SuperIntelligence Lab training multimodal LLMs and has 20+ publications at top AI conferences, including NeurIPS, CVPR, and ICASSP. For the last 4 years, Robert has been building advanced orchestration systems for running multimodal assistants on smart glasses, optimizing for latency and compute. Before Meta, Robert built research infrastructure at Citadel Securities.
The world will have a human-like multimodal AI assistant. The simplest way to put it: if an expert human watching a video stream can help, we're building the AI that can too.