NSFW

Overview

The NSFW Task detects explicit or unsafe content in images and videos.

It analyzes the visual content and returns confidence scores for several categories such as nudity, sexual content, violence, and gore.

When a task completes, it creates an Intelligence file with kind: "nsfw" and a .json output containing the detection results.


Example Output

{
  "id": "file_qrstuvwx9012",
  "object": "intelligence",
  "kind": "nsfw",
  "detected": true,
  "nudity": 0.82,
  "sexual": 0.24,
  "violence": 0.64,
  "gore": 0.04,
  "confidence": 0.82,
  "created": "2025-01-01T01:23:45Z",
  "updated": "2025-01-01T01:23:45Z"
}

Each category field is a confidence score between 0 and 1.

A higher value indicates a stronger likelihood that the category is present in the media.
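
For example, a small helper can turn these scores into a moderation decision. This is a minimal sketch: the NsfwScores type, the isUnsafe name, and the 0.7 threshold are illustrative choices, not part of the API.

// Illustrative helper: flag media when any category score crosses a threshold.
// The 0.7 default is an example value; tune it for your own moderation policy.
type NsfwScores = {
  detected: boolean;
  nudity: number;
  sexual: number;
  violence: number;
  gore: number;
};

function isUnsafe(result: NsfwScores, threshold = 0.7): boolean {
  const scores = [result.nudity, result.sexual, result.violence, result.gore];
  return result.detected && scores.some((score) => score >= threshold);
}

// With the example output above, nudity (0.82) exceeds 0.7, so isUnsafe returns true.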


Creating an NSFW Task

You can create an NSFW detection task for any image or video file using the ittybit SDK or a direct API request.

import { IttybitClient } from "@ittybit/sdk";

const ittybit = new IttybitClient({
  apiKey: process.env.ITTYBIT_API_KEY!
});

const task = await ittybit.tasks.create({
  kind: "nsfw",
  url: "https://example.com/image-or-video.mp4",
  description: "NSFW content detection analysis",
  webhook_url: "https://your-app.com/nsfw-webhook"
});

console.log("Task created:", task.id);
console.log("Status:", task.status);

Webhook Example

When the task completes, ittybit will send a POST request to your webhook_url with the results.

You can use this to automatically flag, moderate, or remove content.

import express from "express";
import { createClient } from "@supabase/supabase-js";

const app = express();
app.use(express.json());

// Example Supabase client setup (the env var names here are illustrative).
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

app.post("/nsfw-webhook", async (req, res) => {
  const { id, kind, status, results } = req.body || {};

  if (kind !== "nsfw" || status !== "completed") {
    return res.status(200).send("Not a completed NSFW task");
  }

  // Example: update Supabase database.
  // `detected` matches the field documented in the File Structure below.
  const { error } = await supabase
    .from("uploads")
    .update({
      nsfw_detected: results?.detected || false,
      nsfw_confidence: results?.confidence || 0,
      processed_at: new Date().toISOString()
    })
    .eq("task_id", id);

  if (error) {
    console.error(error);
    return res.status(500).send("Update failed");
  }

  res.status(200).send("NSFW status updated in DB");
});

This example mirrors the production-ready implementation from the guide Check every Supabase upload for NSFW content.


File Structure

| Property | Type | Description |
| --- | --- | --- |
| id | string | Unique file ID for the Intelligence file. |
| object | string | Always "intelligence". |
| kind | string | Always "nsfw". |
| detected | boolean | Whether any unsafe content was detected. |
| nudity | number | Confidence score (0–1) for nudity detection. |
| sexual | number | Confidence score (0–1) for sexual activity or context. |
| violence | number | Confidence score (0–1) for violent content. |
| gore | number | Confidence score (0–1) for gore or graphic imagery. |
| confidence | number | Overall confidence score for the detection result. |
| created / updated | string (ISO 8601) | Timestamps for creation and last update. |
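
For reference, the same structure expressed as a TypeScript type (the NsfwIntelligenceFile name is illustrative, not from the API):

interface NsfwIntelligenceFile {
  id: string;              // Unique file ID for the Intelligence file
  object: "intelligence";  // Always "intelligence"
  kind: "nsfw";            // Always "nsfw"
  detected: boolean;       // Whether any unsafe content was detected
  nudity: number;          // Confidence score (0–1) for nudity
  sexual: number;          // Confidence score (0–1) for sexual activity or context
  violence: number;        // Confidence score (0–1) for violent content
  gore: number;            // Confidence score (0–1) for gore or graphic imagery
  confidence: number;      // Overall confidence score for the detection result
  created: string;         // ISO 8601 timestamp
  updated: string;         // ISO 8601 timestamp
}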

Supported Inputs

NSFW tasks work with both image and video sources:

  • Image: .jpg, .jpeg, .png, .webp
  • Video: .mp4, .mov, .webm
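
If you want to avoid creating tasks for unsupported sources, a simple extension check (purely illustrative, not part of the API) can gate the request:

const SUPPORTED_EXTENSIONS = [".jpg", ".jpeg", ".png", ".webp", ".mp4", ".mov", ".webm"];

// Returns true when the URL path ends with one of the supported extensions.
function isSupportedSource(url: string): boolean {
  const path = new URL(url).pathname.toLowerCase();
  return SUPPORTED_EXTENSIONS.some((ext) => path.endsWith(ext));
}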

Common Use Cases

  • User-generated content moderation
  • Automatic content filtering before publishing
  • Flagging or blurring unsafe media
  • Age-restricted platform compliance

Example Workflow Automation

You can combine NSFW detection with an automation workflow to process all new uploads automatically:

{
  "name": "Moderate new uploads",
  "trigger": {
    "kind": "event",
    "event": "media.created"
  },
  "workflow": [
    { "kind": "nsfw", "ref": "safety-check" }
  ],
  "status": "active"
}

This automation will run an NSFW task on every newly created media file.
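
If you register automations through the API rather than the dashboard, the JSON above is the request body. The sketch below assumes an automations endpoint at POST https://api.ittybit.com/automations; confirm the exact path in the API reference.

// Sketch: registering the automation via a direct API call (endpoint path assumed).
await fetch("https://api.ittybit.com/automations", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.ITTYBIT_API_KEY}`,
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    name: "Moderate new uploads",
    trigger: { kind: "event", event: "media.created" },
    workflow: [{ kind: "nsfw", ref: "safety-check" }],
    status: "active"
  })
});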
