What you can do
Side view analysis
Capture a runner from the side to get biomechanical scores, running metrics (speed,
step length, cadence, ground contact time), joint angles, overstride analysis, foot
landing patterns, and running style classification.
Back view analysis
Capture a runner from behind to get pronation/supination data, pelvic drop, knee
adduction, hip adduction, and gait cycle metrics for each leg.
Subject management
Create subjects to group analyses by individual runner. Track metadata,
link analyses, and retrieve per-subject history.
Video requirements
Upload an MP4, MOV, AVI, or WEBM file — max 10 seconds, max 30 MB, 1080p recommended at 30 fps or higher. The required orientation depends on the analysis scenario:

| Scenario | Orientation |
|---|---|
| Side view — overground | Landscape (16:9) |
| Side view — treadmill | Portrait (9:16) |
| Back view — any | Portrait (9:16) |
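Before uploading, it can save a round trip to check a file against these constraints client-side. The sketch below is an illustrative helper, not part of the API; it assumes the caller has already probed the video's duration (for example with ffprobe), since that is not available from the file size alone.

```python
# Pre-upload sanity check against the documented video constraints.
# The format, 10-second, and 30 MB limits come from the docs above;
# duration probing is left to the caller and passed in as a number.

ALLOWED_EXTENSIONS = {".mp4", ".mov", ".avi", ".webm"}
MAX_SIZE_BYTES = 30 * 1024 * 1024  # 30 MB
MAX_DURATION_S = 10.0

def check_video(filename: str, size_bytes: int, duration_s: float) -> list[str]:
    """Return a list of constraint violations (empty if the video is acceptable)."""
    problems = []
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext not in ALLOWED_EXTENSIONS:
        problems.append(f"unsupported format: {ext or 'none'}")
    if size_bytes > MAX_SIZE_BYTES:
        problems.append(f"file too large: {size_bytes} bytes > {MAX_SIZE_BYTES}")
    if duration_s > MAX_DURATION_S:
        problems.append(f"video too long: {duration_s:.1f}s > {MAX_DURATION_S:.0f}s")
    return problems
```

Orientation (landscape vs. portrait) still has to be matched to the chosen scenario per the table above.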
How it works
Upload a video
Send a short video of a runner to the
start analysis endpoint.
Make sure the video meets the orientation and format requirements
for the chosen analysis type. You receive a
video_id immediately.
Poll for results
Use the get results endpoint
with your
video_id. The status progresses from pending to analyzing to success.
You can also use webhooks to receive pushed updates instead of polling.
Retrieve data
Once the analysis is complete, the results endpoint returns biomechanical data
and signed URLs for the annotated video and thumbnail. You can also fetch
calculated metrics
and raw frame data separately.
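The poll step above can be sketched as a small loop. This is an illustrative pattern, not library code: `fetch_status` stands in for an authenticated HTTP GET to the get-results endpoint, and the exact response shape is an assumption; only the status values (pending, analyzing, success) come from the docs.

```python
import time

def poll_for_results(video_id, fetch_status, interval_s=2.0, timeout_s=120.0):
    """Poll until the analysis reaches a terminal status, or raise on timeout.

    `fetch_status` is any callable that takes a video_id and returns the
    parsed JSON payload from the get-results endpoint.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        payload = fetch_status(video_id)
        status = payload.get("status")
        if status == "success":
            return payload  # biomechanical data plus signed media URLs
        if status not in ("pending", "analyzing"):
            raise RuntimeError(f"analysis ended with status: {status}")
        time.sleep(interval_s)
    raise TimeoutError(f"analysis {video_id} did not finish within {timeout_s}s")
```

If you register a webhook instead, the server pushes these status transitions to you and the loop is unnecessary.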
Use with AI coding agents
This documentation is designed to be agent-friendly. The entire API is described by an OpenAPI 3.0 specification with detailed descriptions, typed schemas, and realistic examples for every endpoint and response — making it easy for AI coding assistants to understand and generate accurate integration code. Two machine-readable files are available at the root of this documentation site (same access code as the docs):

| File | URL | Purpose |
|---|---|---|
| llms.txt | https://docs.ochy.io/llms.txt | Documentation index — lets an AI agent discover all available pages |
| openapi.yaml | https://docs.ochy.io/openapi.yaml | Full API specification — endpoints, parameters, schemas, and examples |
Setup
Save the OpenAPI spec to your project so AI tools can reference it locally:
Download the spec
Download openapi.yaml and
place it at the root of your project (or any convenient path).
Give your AI tool context
Point your coding assistant at the local file. See the tool-specific
instructions below.
Cursor
Reference the local spec file in chat with @openapi.yaml, then ask Cursor
to generate integration code.
For permanent context, create a Cursor rule at .cursor/rules/ochy-api.mdc:
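A minimal rule file might look like the following. The frontmatter fields (description, globs, alwaysApply) are standard Cursor rule metadata; the rule text itself is illustrative.

```
---
description: Ochy running-analysis API integration
globs:
alwaysApply: false
---

When writing code that calls the Ochy API, consult @openapi.yaml for
endpoint paths, request parameters, authentication, and response schemas.
```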
Other AI tools
| Tool | How to use |
|---|---|
| GitHub Copilot | Open openapi.yaml in a tab or reference it in chat with #file:openapi.yaml |
| ChatGPT / Claude | Upload the spec file, then ask for integration code |
| Windsurf | Reference openapi.yaml in chat with @openapi.yaml |
| Cline / Aider | Point at the spec file in your project root |
Next steps
Quickstart
Run your first analysis in under 5 minutes.
Authentication
Learn how API key authentication works.
Analysis workflow
Understand the full analysis lifecycle, video constraints, and polling strategy.
Analysis webhooks
Receive progress and completion events on your server without polling.
Subject management
Organize analyses by individual runner with subjects.
API reference
Browse all endpoints with request and response schemas.