
EDGE API Reference

Real-time video processing and AI inference at the network edge

API Overview

Base URL

https://api.wave.com/v1/edge

Authentication

Bearer {api_key}

Rate Limit

50,000 frames/minute

Processing SLA

<15ms per frame
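
Every endpoint below is a plain HTTPS call against the base URL with bearer authentication. A minimal request helper sketch, assuming a runtime with global fetch (Node 18+ or a browser), the API key in a WAVE_API_KEY environment variable, and the { success, data } envelope shown in the 201 response further down:

JavaScript
const BASE_URL = 'https://api.wave.com/v1/edge';

// Minimal authenticated request helper; only the base URL and bearer scheme are documented above.
async function edgeRequest(path, options = {}) {
  const res = await fetch(`${BASE_URL}${path}`, {
    ...options,
    headers: {
      Authorization: `Bearer ${process.env.WAVE_API_KEY}`,
      'Content-Type': 'application/json',
      ...options.headers,
    },
  });
  if (!res.ok) throw new Error(`Edge API error ${res.status}`);
  return res.json();
}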

POST
/processors
Processors

Deploy a new frame processor to edge locations

Request Body

{
  "name": "content-moderation-v2",
  "description": "Real-time NSFW detection and blur",
  "code": "export default async function process(frame, ctx) {\n  const result = await ctx.model.infer(frame);\n  if (result.nsfw > 0.85) {\n    return { action: 'blur', regions: result.regions };\n  }\n  return { action: 'pass' };\n}",
  "runtime": "javascript",
  "model": "moderation-v2",
  "gpu": true,
  "regions": ["us-east-1", "us-west-2", "eu-west-1", "ap-southeast-1"],
  "timeout": 12,
  "resources": {
    "tier": "gpu-standard",
    "cpu": "4-core",
    "memory": "8GB",
    "gpu": "nvidia-t4"
  },
  "config": {
    "confidenceThreshold": 0.85,
    "blurIntensity": 25,
    "sampleRate": 5
  }
}

Response (201)

{
  "success": true,
  "data": {
    "id": "proc_a1b2c3d4e5f6",
    "name": "content-moderation-v2",
    "status": "deploying",
    "version": 1,
    "runtime": "javascript",
    "model": "moderation-v2",
    "gpu": true,
    "regions": ["us-east-1", "us-west-2", "eu-west-1", "ap-southeast-1"],
    "deploymentProgress": {
      "us-east-1": "deploying",
      "us-west-2": "pending",
      "eu-west-1": "pending",
      "ap-southeast-1": "pending"
    },
    "consoleUrl": "https://console.wave.com/edge/proc_a1b2c3d4e5f6",
    "createdAt": "2024-11-20T15:45:00Z"
  }
}

Code Example

JavaScript
import { WaveClient } from '@wave/api-client';

const wave = new WaveClient({ apiKey: process.env.WAVE_API_KEY });

const processor = await wave.edge.processors.create({
  name: 'content-moderation-v2',
  description: 'Real-time NSFW detection and blur',
  runtime: 'javascript',
  model: 'moderation-v2',
  gpu: true,
  regions: ['us-east-1', 'us-west-2'],
  timeout: 12,
  resources: {
    tier: 'gpu-standard',
  },
  code: `
    export default async function process(frame, ctx) {
      const result = await ctx.model.infer(frame);
      if (result.nsfw > 0.85) {
        return { action: 'blur', regions: result.regions };
      }
      return { action: 'pass' };
    }
  `,
});

console.log(`Processor created: ${processor.id} (status: ${processor.status})`);

GET
/processors
Processors

List all deployed processors with filtering
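
A minimal listing sketch with raw fetch. The status query parameter is an assumption; the supported filters are not enumerated here:

JavaScript
const res = await fetch('https://api.wave.com/v1/edge/processors?status=active', {
  headers: { Authorization: `Bearer ${process.env.WAVE_API_KEY}` },
});
const { data } = await res.json(); // assumed to be an array of processor objects
for (const p of data) console.log(p.id, p.name, p.status);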

GET
/processors/:id
Processors

Get detailed processor information
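
Fetching a single processor by ID; the response is assumed to match the create response shown above:

JavaScript
const id = 'proc_a1b2c3d4e5f6'; // ID returned by the create call
const res = await fetch(`https://api.wave.com/v1/edge/processors/${id}`, {
  headers: { Authorization: `Bearer ${process.env.WAVE_API_KEY}` },
});
const { data: processor } = await res.json();
console.log(processor.status, processor.deploymentProgress);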

PATCH
/processors/:id
Processors

Update processor configuration or code
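
A sketch of a partial update. Sending only the config block mirrors the create request, but the PATCH merge semantics are an assumption:

JavaScript
await fetch('https://api.wave.com/v1/edge/processors/proc_a1b2c3d4e5f6', {
  method: 'PATCH',
  headers: {
    Authorization: `Bearer ${process.env.WAVE_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    config: { confidenceThreshold: 0.9 }, // tighten the threshold used in the create example
  }),
});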

POST
/processors/:id/process
Processing

Process a single video frame through the processor
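
A single-frame sketch. The request fields (frame as base64, format) are illustrative assumptions; only the path and the action values returned by the processor code above are documented:

JavaScript
import { readFile } from 'node:fs/promises';

const frame = await readFile('frame.jpg');
const res = await fetch('https://api.wave.com/v1/edge/processors/proc_a1b2c3d4e5f6/process', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.WAVE_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    frame: frame.toString('base64'), // hypothetical field: base64-encoded frame bytes
    format: 'jpeg',                  // hypothetical field
  }),
});
const { data } = await res.json();
console.log(data.action); // 'pass' or 'blur', per the processor code in the create example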

POST
/processors/:id/batch
Processing

Process multiple frames in a single request
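
A batch sketch; the frames array is an assumed request shape, not a documented schema:

JavaScript
import { readFile } from 'node:fs/promises';

const buffers = await Promise.all(['f1.jpg', 'f2.jpg', 'f3.jpg'].map((f) => readFile(f)));
const res = await fetch('https://api.wave.com/v1/edge/processors/proc_a1b2c3d4e5f6/batch', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.WAVE_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    frames: buffers.map((b) => b.toString('base64')), // hypothetical field
  }),
});
const { data } = await res.json();
console.log(data); // per-frame results; exact shape not documented here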

GET
/processors/:id/metrics
Monitoring

Get detailed processor performance metrics
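
Polling metrics; only the path is documented, so the response is logged as-is rather than assuming field names:

JavaScript
const res = await fetch('https://api.wave.com/v1/edge/processors/proc_a1b2c3d4e5f6/metrics', {
  headers: { Authorization: `Bearer ${process.env.WAVE_API_KEY}` },
});
const { data: metrics } = await res.json();
console.log(metrics); // e.g. latency percentiles and throughput per region (field names not shown above)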

POST
/processors/:id/pause
Processors

Pause a processor (frames bypass processing)

POST
/processors/:id/resume
Processors

Resume a paused processor
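
Pause and resume are simple POSTs; this sketch assumes neither requires a request body:

JavaScript
const headers = { Authorization: `Bearer ${process.env.WAVE_API_KEY}` };
const base = 'https://api.wave.com/v1/edge/processors/proc_a1b2c3d4e5f6';

await fetch(`${base}/pause`, { method: 'POST', headers });  // frames now bypass processing
// ...maintenance window...
await fetch(`${base}/resume`, { method: 'POST', headers }); // processing resumes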

POST
/processors/:id/rollback
Processors

Roll back to a previous processor version
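
A rollback sketch. The version field in the body is an assumption: the create response exposes a numeric version, but the rollback payload is not documented here:

JavaScript
await fetch('https://api.wave.com/v1/edge/processors/proc_a1b2c3d4e5f6/rollback', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.WAVE_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ version: 1 }), // hypothetical field: version to roll back to
});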

DELETE
/processors/:id
Processors

Delete a processor and stop all processing
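
Deletion stops all processing for the processor; a minimal sketch:

JavaScript
await fetch('https://api.wave.com/v1/edge/processors/proc_a1b2c3d4e5f6', {
  method: 'DELETE',
  headers: { Authorization: `Bearer ${process.env.WAVE_API_KEY}` },
});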

GET
/models
Models

List available AI models for edge processing

GET
/regions
Infrastructure

List available edge regions with status
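
Both catalog endpoints are plain GETs; a combined sketch, assuming the usual { success, data } envelope:

JavaScript
const headers = { Authorization: `Bearer ${process.env.WAVE_API_KEY}` };

const models = await (await fetch('https://api.wave.com/v1/edge/models', { headers })).json();
const regions = await (await fetch('https://api.wave.com/v1/edge/regions', { headers })).json();

console.log(models.data);  // available models, e.g. 'moderation-v2'
console.log(regions.data); // edge regions with their status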

Error Codes

400 Bad Request

Invalid frame format or missing required fields

401 Unauthorized

Invalid or missing API key

408 Processing Timeout

Frame processing exceeded the 15ms per-frame SLA

429 Too Many Requests

Rate limit exceeded (50,000 frames/minute)

503 Service Unavailable

Region temporarily unavailable

507 Insufficient Resources

GPU/CPU capacity unavailable in region
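
A hedged error-handling sketch: 429 and 503 are treated as retryable with exponential backoff, 408 as a per-frame SLA miss. The retry policy is illustrative, not documented API behavior:

JavaScript
async function edgeFetchWithRetry(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(url, options);
    if (res.ok) return res.json();
    if (res.status === 429 || res.status === 503) {
      // Retryable: back off exponentially before the next attempt
      await new Promise((resolve) => setTimeout(resolve, 2 ** attempt * 250));
      continue;
    }
    if (res.status === 408) {
      // Frame exceeded the 15ms SLA: consider a lower sampleRate or lighter processor code
      throw new Error('Frame processing exceeded the 15ms SLA');
    }
    throw new Error(`Edge API error ${res.status}`);
  }
  throw new Error('Retries exhausted');
}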

Best Practices

Sample Strategically

Process 1-5 fps for analysis-only workloads; run inference at full frame rate only when it modifies the output frames.
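
A client-side sampling sketch to illustrate the idea; whether the sampleRate config field in the create request already does this server-side is not specified here:

JavaScript
// Keep every Nth frame for analysis-only workloads (roughly 1-5 fps as recommended above).
function sampleFrames(frames, every = 5) {
  return frames.filter((_, i) => i % every === 0);
}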

Monitor P99 Latency

Alert when P99 latency exceeds 14ms to stay comfortably within the 15ms SLA guarantee.

Cache Results

Cache model results for similar frames to reduce redundant processing.
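
A minimal in-memory cache sketch keyed on a frame hash. It only catches exact-duplicate frames (catching "similar" frames would need perceptual hashing), and nothing here is part of the API itself:

JavaScript
import { createHash } from 'node:crypto';

const cache = new Map();

async function processCached(frameBuffer, processFrame) {
  const key = createHash('sha1').update(frameBuffer).digest('hex');
  if (cache.has(key)) return cache.get(key);      // identical frame already processed
  const result = await processFrame(frameBuffer); // e.g. a POST /processors/:id/process call
  cache.set(key, result);
  return result;
}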

Deploy Regionally

Start with 2-3 regions, validate performance, then expand globally.

Use Batch API

For VOD processing, the batch API is 40% more efficient than single-frame calls.

Version Rollbacks

Keep previous versions available and configure automatic rollback on error-rate spikes.
