Render-Free Audiovisual Platform — AI-powered collaboration for editors and producers
AI-powered platform that enables editors, producers, and clients to review and annotate video content in real time — without rendering.
Role Lead UX Designer
Timeline 5 months
Team PM, UX, Engineering, ML, QA
Users Editors, Producers, Post Supervisors
Outcome Shipped and adopted across active projects
Overview
In creative production, video collaboration is slow and fragmented. Teams render files, share screenshots, and give feedback across email, Slack, and PDFs — losing time, clarity, and version control.
Gravitad is building an AI-powered platform that enables editors, producers, and clients to review and annotate video content in real time — without rendering.
It merges real-time streaming, contextual feedback, and version intelligence into one collaborative workspace — helping creative teams work faster and smarter across geographies.
Role
As Lead UX Designer, I led the end-to-end product design — from research and UX strategy to system architecture and prototyping — collaborating closely with product, engineering, and ML teams to define the platform’s user experience and design system.
Outcome
- Established the UX strategy and design system for an AI-first collaboration product
- Reduced review and feedback loops through real-time, intelligent design flows
- Created scalable UX patterns now used across 4 internal product teams
- Defined the human–AI interaction model that guides all future features
Research & Discovery – Understanding the Problem
Research Approach
To uncover pain points, I conducted:
- 12 in-depth interviews with editors, producers, and post supervisors
- Workflow audits of real production setups
- Task shadowing to observe review and approval cycles
Insights

From Research to Product Vision
Based on research insights, I facilitated a concept sprint with Product and ML teams to identify where AI could add real value.
We defined three core experience pillars:

I mapped opportunities where AI could assist:
- Scene recognition → jump to relevant frames (see the sketch below)
- Contextual comment tagging → faster communication
- Version comparison → automated diffs
- Adaptive playback → seamless performance
These insights became the foundation for the first MVP scope.
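To make the scene-recognition opportunity concrete for engineering, the interaction reduces to a simple contract: the ML service returns timestamped scene tags, and the player seeks to the first matching tag. The sketch below is illustrative only — the `SceneTag` shape and `jumpToScene` helper are hypothetical names, not the shipped Gravitad API — and assumes a standard HTML5 video element:

```typescript
// Illustrative only: the real schema and player integration are internal.
interface SceneTag {
  label: string;      // e.g. "interview", "b-roll"
  startSec: number;   // where the scene begins on the timeline
  confidence: number; // ML confidence, 0..1
}

// Jump the player to the first scene whose label matches the query.
function jumpToScene(
  video: HTMLVideoElement,
  tags: SceneTag[],
  query: string
): boolean {
  const hit = tags.find((t) =>
    t.label.toLowerCase().includes(query.toLowerCase())
  );
  if (!hit) return false;
  video.currentTime = hit.startSec; // seek in place — no re-render, no export
  return true;
}
```

The point of the sketch is the UX promise, not the code: a reviewer types "interview" and lands on the frame, instead of scrubbing or waiting for a render.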
🎨 Design Process & Execution
1️⃣ Ideation & Prototyping
- Sketched early workflows for AI-assisted review, annotation, and version diffing
- Created low-fi prototypes in Figma + ProtoPie
- Ran Wizard-of-Oz tests to simulate AI tagging and verify user expectations before backend integration
- Iterated based on usability feedback to refine clarity, trust, and control
2️⃣ System Design & Collaboration
- Built a tokenized Figma + Storybook design system for scalable, cross-platform consistency (see the token sketch after this list)
- Defined UX documentation, design tokens, and accessibility standards (WCAG AA)
- Established UX–engineering QA routines for design handoff and iteration
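Design tokens were the bridge between Figma and Storybook. As a hedged illustration of the pattern — the values and names here are placeholders, not the production token set — tokens can live in one typed object that both components and documentation consume:

```typescript
// Sketch of the token pattern, not the shipped token set.
export const tokens = {
  color: {
    surface: "#101014",
    accent: "#5B8CFF",
    // WCAG AA: body text on surface should keep a contrast ratio >= 4.5:1
    textPrimary: "#F4F4F6",
  },
  space: { xs: 4, sm: 8, md: 16, lg: 24 }, // px scale
  radius: { control: 6, card: 12 },
} as const;

export type Tokens = typeof tokens;
```

Because Storybook stories and product components import the same object, a token change propagates everywhere at once, which is what made the system hold up across platforms.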
3️⃣ Designing for AI
- AI-Assisted Video Understanding: Designed editable AI scene tags and timeline visualizations based on ML outputs
- Contextual Comment Intelligence: Created confidence indicators and override flows for AI-suggested comment tags (a sketch follows this list)
- Smart Version Management: Designed visual diff interfaces highlighting AI-detected changes
- Adaptive Playback: Defined visual states and animations for AI-optimized streaming feedback
- AI Analytics Dashboards: Translated ML insights into digestible visual patterns and team-level metrics
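The confidence indicators and override flows follow one rule: the UI never silently trusts the model, and an explicit user decision always wins. A minimal sketch of that logic — thresholds, type names, and states are chosen for illustration, not taken from the shipped product:

```typescript
// Illustrative thresholds; real values would come from usability testing.
type TagState = "auto-applied" | "suggested" | "hidden";

interface SuggestedTag {
  label: string;
  confidence: number; // 0..1 from the ML service
  userOverride?: "accepted" | "rejected";
}

function resolveTagState(tag: SuggestedTag): TagState {
  // An explicit user override always beats the model.
  if (tag.userOverride === "accepted") return "auto-applied";
  if (tag.userOverride === "rejected") return "hidden";
  if (tag.confidence >= 0.9) return "auto-applied"; // high confidence: apply, keep editable
  if (tag.confidence >= 0.5) return "suggested";    // medium: show with a confidence badge
  return "hidden";                                   // low: don't add noise to the review
}
```

Encoding the behavior this way keeps the trust model legible: every AI suggestion is visibly provisional until either the confidence or the user confirms it.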