aether-content-moderation
Automated content scanning, review queues, and report system
The aether-content-moderation crate provides the content moderation pipeline for Aether, including automated scanning (text, image, mesh, and WASM), a human review queue, severity classification, user report handling, and content rating.
Overview
All user-submitted content passes through the moderation pipeline before becoming visible. The system supports:
- Pluggable scanners for text, image, mesh geometry, and WASM bytecode analysis.
- Decision engine with configurable auto-approve and auto-flag rules.
- Review queue with priority ordering and claim/decide workflow for human moderators.
- Report system with aggregation, escalation thresholds, and category tracking.
- Severity classification with graduated enforcement actions.
- Content ratings for age-appropriate classification.
- Moderation status tracking as a state machine from pending through approved or rejected.
Key Types
ScannerPipeline
Runs a sequence of content scanners and aggregates their results.
```rust
use aether_content_moderation::{ScannerPipeline, ContentItem, ContentType, AggregatedScanResult};

let pipeline = ScannerPipeline::new();
let result: AggregatedScanResult = pipeline.scan(&ContentItem {
    content_type: ContentType::Text,
    data: text_bytes,
})?;
```
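The snippet above doesn't show how per-scanner results are combined. A minimal, self-contained sketch of one plausible strategy, taking the worst (highest) score and unioning flags, is below; the `Scanner` trait, the scanner names, and the max-score rule are illustrative assumptions, not the crate's API:

```rust
#[derive(Debug)]
struct ScanResult {
    score: f32,         // 0.0 = clean, 1.0 = certain violation
    flags: Vec<String>, // human-readable reasons
}

trait Scanner {
    fn scan(&self, data: &[u8]) -> ScanResult;
}

// Hypothetical text scanner: flags a single hard-coded token.
struct ProfanityScanner;
impl Scanner for ProfanityScanner {
    fn scan(&self, data: &[u8]) -> ScanResult {
        let text = String::from_utf8_lossy(data);
        if text.contains("badword") {
            ScanResult { score: 0.9, flags: vec!["profanity".into()] }
        } else {
            ScanResult { score: 0.0, flags: Vec::new() }
        }
    }
}

// Hypothetical size scanner: flags suspiciously large payloads.
struct SizeScanner;
impl Scanner for SizeScanner {
    fn scan(&self, data: &[u8]) -> ScanResult {
        if data.len() > 10_000 {
            ScanResult { score: 0.5, flags: vec!["oversized".into()] }
        } else {
            ScanResult { score: 0.0, flags: Vec::new() }
        }
    }
}

// Aggregate by taking the worst score and collecting every flag.
fn aggregate(scanners: &[Box<dyn Scanner>], data: &[u8]) -> ScanResult {
    let mut combined = ScanResult { score: 0.0, flags: Vec::new() };
    for scanner in scanners {
        let r = scanner.scan(data);
        combined.score = combined.score.max(r.score);
        combined.flags.extend(r.flags);
    }
    combined
}

fn main() {
    let scanners: Vec<Box<dyn Scanner>> =
        vec![Box::new(ProfanityScanner), Box::new(SizeScanner)];
    let result = aggregate(&scanners, b"text with badword in it");
    assert_eq!(result.score, 0.9);
    assert_eq!(result.flags, vec!["profanity".to_string()]);
}
```

Taking the maximum score (rather than an average) keeps a single confident detection from being diluted by scanners that found nothing.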
DecisionEngine
Evaluates scan results against configurable rules to auto-approve, auto-flag, or escalate.
```rust
use aether_content_moderation::{DecisionEngine, DecisionConfig, DecisionRule, Decision};

let engine = DecisionEngine::new(DecisionConfig {
    rules: vec![
        DecisionRule::AutoApproveBelow(0.3),
        DecisionRule::AutoFlagAbove(0.8),
    ],
});
let decision: Decision = engine.evaluate(&scan_result);
```
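With thresholds like those above, scores in the ambiguous band between them have to go somewhere. A minimal sketch of that three-way split follows; the `Decision` variant names echo the crate's, but the evaluation logic itself is an assumption:

```rust
#[derive(Debug, PartialEq)]
enum Decision {
    Approve,  // confidently clean: publish without review
    Flag,     // confidently violating: send straight to the review queue
    Escalate, // ambiguous band between the thresholds: human review
}

// Map a scan score to a decision given the two configured thresholds.
fn evaluate(score: f32, approve_below: f32, flag_above: f32) -> Decision {
    if score < approve_below {
        Decision::Approve
    } else if score > flag_above {
        Decision::Flag
    } else {
        Decision::Escalate
    }
}

fn main() {
    assert_eq!(evaluate(0.1, 0.3, 0.8), Decision::Approve);
    assert_eq!(evaluate(0.9, 0.3, 0.8), Decision::Flag);
    assert_eq!(evaluate(0.5, 0.3, 0.8), Decision::Escalate);
}
```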
ReviewQueue
A priority-ordered queue for human moderators to review flagged content.
```rust
use aether_content_moderation::{ReviewQueue, ReviewItem, ReviewAction, ReviewPriority};

let mut queue = ReviewQueue::new();
queue.enqueue(ReviewItem {
    content_id: item_id,
    priority: ReviewPriority::High,
    flags: scan_flags,
});

// Moderator claims and decides
let item = queue.claim(moderator_id)?;
queue.decide(item.id, ReviewAction::Approve)?;
```
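Priority ordering like this is commonly built on a max-heap. A self-contained sketch using `std::collections::BinaryHeap` is below; the `Priority` variants and `Item` shape are illustrative assumptions rather than the crate's types:

```rust
use std::cmp::Ordering;
use std::collections::BinaryHeap;

// Ord is derived from declaration order: Low < Medium < High < Critical.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone, Copy)]
enum Priority { Low, Medium, High, Critical }

#[derive(Debug, PartialEq, Eq)]
struct Item {
    priority: Priority,
    content_id: u64,
}

// Order items by priority only, so BinaryHeap pops the most urgent first.
impl Ord for Item {
    fn cmp(&self, other: &Self) -> Ordering {
        self.priority.cmp(&other.priority)
    }
}
impl PartialOrd for Item {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

fn main() {
    let mut queue = BinaryHeap::new();
    queue.push(Item { priority: Priority::Low, content_id: 1 });
    queue.push(Item { priority: Priority::Critical, content_id: 2 });
    queue.push(Item { priority: Priority::Medium, content_id: 3 });

    let first = queue.pop().unwrap();
    assert_eq!(first.content_id, 2); // the Critical item comes out first
}
```

A real claim/decide workflow would also need to track which moderator holds each claimed item, which this sketch omits.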
ReportAggregator
Collects user reports and triggers escalation when thresholds are exceeded.
```rust
use aether_content_moderation::{ReportAggregator, Report, ReportCategory, ReportSummary};

let mut aggregator = ReportAggregator::new();
aggregator.submit(Report {
    reporter: user_id,
    target: content_id,
    category: ReportCategory::Harassment,
    description: "Offensive avatar".into(),
});
let summary: ReportSummary = aggregator.summarize(content_id);
```
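The core of threshold-based escalation is a per-target counter. A minimal sketch, assuming a simple count-and-compare rule (the threshold value and the omission of per-category tracking are simplifications):

```rust
use std::collections::HashMap;

struct Aggregator {
    counts: HashMap<u64, u32>, // content_id -> number of reports received
    escalation_threshold: u32,
}

impl Aggregator {
    fn new(escalation_threshold: u32) -> Self {
        Aggregator { counts: HashMap::new(), escalation_threshold }
    }

    // Returns true when this report pushes the target to the threshold.
    fn submit(&mut self, content_id: u64) -> bool {
        let count = self.counts.entry(content_id).or_insert(0);
        *count += 1;
        *count >= self.escalation_threshold
    }
}

fn main() {
    let mut agg = Aggregator::new(3);
    assert!(!agg.submit(42)); // 1st report: below threshold
    assert!(!agg.submit(42)); // 2nd report: still below
    assert!(agg.submit(42));  // 3rd report triggers escalation
}
```

A production aggregator would typically also dedupe repeat reports from the same reporter so one user cannot trigger escalation alone.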
ContentSeverity
Classifies content violations by severity with corresponding enforcement actions.
```rust
use aether_content_moderation::{ContentSeverity, EnforcementAction};

let severity = ContentSeverity::High;
let action = severity.enforcement_action();
// Returns EnforcementAction::Remove or similar
```
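"Graduated" enforcement usually means a fixed mapping from severity tier to action. A sketch of such a mapping follows; the specific action chosen for each tier is an assumption, not the crate's documented policy:

```rust
#[derive(Debug, Clone, Copy)]
enum Severity { Low, Medium, High, Critical }

#[derive(Debug, PartialEq)]
enum Action { Warn, Hide, Remove, RemoveAndSuspend }

// Each severity tier maps to exactly one enforcement action.
fn enforcement_action(severity: Severity) -> Action {
    match severity {
        Severity::Low => Action::Warn,              // notify the author
        Severity::Medium => Action::Hide,           // hide pending review
        Severity::High => Action::Remove,           // take content down
        Severity::Critical => Action::RemoveAndSuspend, // also suspend the account
    }
}

fn main() {
    assert_eq!(enforcement_action(Severity::High), Action::Remove);
    assert_eq!(enforcement_action(Severity::Low), Action::Warn);
}
```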
WasmAnalyzer
Performs static analysis on WASM bytecode to detect malicious patterns.
```rust
use aether_content_moderation::{WasmAnalyzer, WasmAnalysisResult, WasmViolation};

let analyzer = WasmAnalyzer::new();
let result: WasmAnalysisResult = analyzer.analyze(&wasm_bytes);
for violation in &result.violations {
    // Handle detected malicious patterns
}
```
ModerationStatus
State machine tracking the moderation lifecycle of a content item.
```rust
use aether_content_moderation::{ModerationStatus, InvalidTransition};

let mut status = ModerationStatus::Pending;
status = status.transition_to(ModerationStatus::InReview)?;
status = status.transition_to(ModerationStatus::Approved)?;
```
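A state machine like this is naturally expressed as a match over allowed edges. The sketch below assumes a particular edge set, with Approved and Rejected as terminal states; the real crate's legal transitions may differ:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Status { Pending, InReview, Approved, Rejected }

#[derive(Debug, PartialEq)]
struct InvalidTransition;

fn transition(from: Status, to: Status) -> Result<Status, InvalidTransition> {
    use Status::*;
    match (from, to) {
        // Pending can be auto-decided or sent to human review.
        (Pending, InReview) | (Pending, Approved) | (Pending, Rejected) => Ok(to),
        // A reviewed item is either approved or rejected.
        (InReview, Approved) | (InReview, Rejected) => Ok(to),
        // Everything else, including moves out of a terminal state, is invalid.
        _ => Err(InvalidTransition),
    }
}

fn main() {
    let s = transition(Status::Pending, Status::InReview).unwrap();
    assert_eq!(transition(s, Status::Approved), Ok(Status::Approved));
    assert_eq!(
        transition(Status::Approved, Status::Pending),
        Err(InvalidTransition)
    );
}
```

Encoding the edges in one match means the compiler can't forget a state, and every illegal move is rejected uniformly with `InvalidTransition`.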
Usage Examples
Full Moderation Pipeline
```rust
use aether_content_moderation::{
    ScannerPipeline, DecisionEngine, DecisionConfig, Decision, ReviewQueue,
    ContentItem, ContentType,
};

let scanner = ScannerPipeline::new();
let engine = DecisionEngine::new(DecisionConfig::default());
let mut queue = ReviewQueue::new();

let scan = scanner.scan(&content)?;
let decision = engine.evaluate(&scan);
match decision {
    Decision::Approve => publish(content_id),
    Decision::Flag => queue.enqueue_from_scan(content_id, &scan),
    Decision::Reject => reject(content_id),
}
```
Content Rating
```rust
use aether_content_moderation::{RatingCategory, RatingDecision};

let rating = RatingDecision::classify(&scan_result);
// Returns an age-appropriate rating category
```
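Rating classification typically buckets a content score into age bands. A minimal sketch under that assumption; the category names and score cutoffs here are illustrative, not the crate's:

```rust
#[derive(Debug, PartialEq)]
enum Rating { Everyone, Teen, Mature }

// Bucket a 0.0..=1.0 mature-content score into an age band.
fn classify(mature_score: f32) -> Rating {
    if mature_score < 0.2 {
        Rating::Everyone
    } else if mature_score < 0.6 {
        Rating::Teen
    } else {
        Rating::Mature
    }
}

fn main() {
    assert_eq!(classify(0.05), Rating::Everyone);
    assert_eq!(classify(0.4), Rating::Teen);
    assert_eq!(classify(0.9), Rating::Mature);
}
```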