Custom Avatar
Import a glTF model, configure skeletal animation, blend shapes, IK, and LOD for a custom avatar in Aether.
In this tutorial you will import a custom 3D avatar into Aether, set up its skeleton for GPU skinning, configure blend shapes for facial expressions, add inverse kinematics for VR tracking, define LOD tiers, and test the avatar in a running scene.
Prerequisites
- Aether source cloned and built
- A glTF 2.0 avatar model (`.glb` or `.gltf`). Download a sample from Ready Player Me or use any VRM-compatible model.
- Completed Build a Simple World or have a working Aether project
Avatar System Overview
The `aether-avatar` crate provides the full pipeline:

| Module | Purpose |
|---|---|
| `formats` | Import glTF/VRM models |
| `skeleton` / `skeleton_eval` | Bone hierarchy, matrix computation |
| `skinning` | GPU skinning with bone matrix palettes |
| `blend_shapes` | Facial expression and viseme targets |
| `ik` / `fabrik` | 3-point and 6-point inverse kinematics |
| `calibration` | T-pose calibration for body proportions |
| `avatar_lod` | Distance-based level of detail |
| `performance_rating` | Budget validation |
Set Up the Project
- Create a new crate and add dependencies:

```bash
cargo new --bin my-avatar-demo
mkdir -p my-avatar-demo/assets
cp ~/Downloads/my-avatar.glb my-avatar-demo/assets/
```

In `my-avatar-demo/Cargo.toml`:

```toml
[dependencies]
aether-ecs = { path = "../crates/aether-ecs" }
aether-avatar = { path = "../crates/aether-avatar" }
aether-asset-pipeline = { path = "../crates/aether-asset-pipeline" }
aether-renderer = { path = "../crates/aether-renderer" }
```
Import the glTF Model
- Use the asset pipeline to import and validate the model:

```rust
use aether_asset_pipeline::{AssetImporter, GltfImportConfig};

fn import_avatar() -> aether_asset_pipeline::ImportedAsset {
    let config = GltfImportConfig {
        file_path: "assets/my-avatar.glb".into(),
        generate_lods: true,
        max_texture_size: 2048,
        compress_textures: true,
    };
    let asset = AssetImporter::new().import_gltf(&config).expect("Import failed");
    println!("Imported: {} verts, {} bones, {} blend shapes",
        asset.vertex_count,
        asset.skeleton.as_ref().map_or(0, |s| s.bones.len()),
        asset.blend_shape_count);
    asset
}
```
Set Up the Skeleton
- Extract and inspect the bone hierarchy:

```rust
use aether_avatar::skeleton::Skeleton;

fn setup_skeleton(asset: &aether_asset_pipeline::ImportedAsset) -> Skeleton {
    let skel = asset.skeleton.as_ref().expect("Model has no skeleton");
    println!("Skeleton: {} bones", skel.bones.len());
    for (i, bone) in skel.bones.iter().enumerate() {
        let parent = bone.parent.map(|p| skel.bone_names[p].as_str()).unwrap_or("(root)");
        println!("  [{}] {} -> {}", i, skel.bone_names[i], parent);
    }
    skel.clone()
}
```
Aether expects standard humanoid bone names (`Hips`, `Spine`, `Chest`, `Head`, `LeftUpperArm`, etc.) for IK mapping.
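If your model uses different naming (many VRM exports prefix their bones), the IK setup later in this tutorial will fail to find its anchor bones. A quick standalone check, independent of the engine API and using an illustrative required-bone list, might look like:

```rust
/// Return the standard humanoid bone names missing from `bone_names`.
/// The required list here is illustrative, not the engine's authoritative set.
fn missing_humanoid_bones(bone_names: &[&str]) -> Vec<&'static str> {
    const REQUIRED: [&str; 6] = [
        "Hips", "Spine", "Chest", "Head", "LeftUpperArm", "RightUpperArm",
    ];
    REQUIRED
        .iter()
        .filter(|required| !bone_names.contains(required))
        .copied()
        .collect()
}
```

Run this against the names printed by `setup_skeleton`; a non-empty result means you need to rename bones in your DCC tool before the IK mapping will work.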
Configure Skeletal Animation
- Set up GPU skinning. Aether supports `LinearBlend` (fast, but may show artifacts at twisted joints) and `DualQuaternion` (volume-preserving, recommended for humanoids):
```rust
use aether_avatar::skinning::{GpuSkinningConfig, SkinningMethod};
use aether_avatar::skeleton_eval::{compute_bone_matrices, SkeletonPose, BoneTransform};

fn setup_skinning(skeleton: &Skeleton) -> GpuSkinningConfig {
    GpuSkinningConfig {
        method: SkinningMethod::DualQuaternion,
        max_bones: skeleton.bones.len() as u32,
        max_vertices: 65536,
        workgroup_size: 256,
    }
}

fn compute_tpose(skeleton: &Skeleton) -> aether_avatar::skinning::BoneMatrixPalette {
    let pose = SkeletonPose {
        transforms: skeleton.bones.iter().map(|_| BoneTransform {
            position: [0.0, 0.0, 0.0],
            rotation: [0.0, 0.0, 0.0, 1.0],
            scale: [1.0, 1.0, 1.0],
        }).collect(),
    };
    compute_bone_matrices(skeleton, &pose)
}
```
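For intuition, linear blend skinning is just a weighted sum of per-bone transforms applied to each vertex. A self-contained sketch with plain arrays (not the engine's GPU path, which runs the same math in a compute shader over the bone matrix palette):

```rust
/// Linear blend skinning for one vertex: v' = sum_i w_i * (M_i * v).
/// Each bone matrix is a 3x4 affine transform in row-major order.
fn skin_vertex(vertex: [f32; 3], bones: &[[f32; 12]], weights: &[f32]) -> [f32; 3] {
    let mut out = [0.0f32; 3];
    for (matrix, w) in bones.iter().zip(weights) {
        for row in 0..3 {
            let m = &matrix[row * 4..row * 4 + 4];
            // Affine transform: rotation/scale part plus translation column.
            let transformed = m[0] * vertex[0] + m[1] * vertex[1] + m[2] * vertex[2] + m[3];
            out[row] += w * transformed;
        }
    }
    out
}
```

A vertex weighted 50/50 between an identity bone and a bone translated one unit along X lands halfway between them, which is exactly the "candy wrapper" averaging that `DualQuaternion` skinning improves on at twisted joints.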
Configure Blend Shapes
Blend shapes (morph targets) deform the mesh for facial expressions and lip sync.
- List and drive blend shapes:

```rust
use aether_avatar::blend_shapes::{BlendShapeSet, BlendShapeWeights, GpuBlendShapeConfig};

fn setup_blend_shapes(asset: &aether_asset_pipeline::ImportedAsset) -> BlendShapeSet {
    let bs = asset.blend_shapes.as_ref().expect("No blend shapes");
    for target in &bs.targets {
        println!("  blend shape: {} ({} deltas)", target.name, target.deltas.len());
    }
    bs.clone()
}

fn make_smile() -> BlendShapeWeights {
    let mut w = BlendShapeWeights::new();
    w.set("mouthSmileLeft", 0.8);
    w.set("mouthSmileRight", 0.8);
    w.set("cheekSquintLeft", 0.3);
    w.set("cheekSquintRight", 0.3);
    w
}
```
Common names follow the ARKit convention: `eyeBlinkLeft`, `jawOpen`, `mouthSmileLeft`, `browInnerUp`, etc.
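Because weights are plain 0.0-1.0 floats, simple procedural animation needs no engine support. A standalone sketch of a periodic blink curve you could feed into `eyeBlinkLeft`/`eyeBlinkRight` (illustrative helper, not an `aether-avatar` API):

```rust
/// Blink weight for ARKit-style eyeBlink targets: 0.0 = open, 1.0 = closed.
/// Blinks once per `period` seconds; the blink itself lasts `blink_len` seconds,
/// closing fully at its midpoint and reopening (a triangle curve).
fn blink_weight(time: f32, period: f32, blink_len: f32) -> f32 {
    let t = time % period;
    if t >= blink_len {
        return 0.0; // eyes open between blinks
    }
    let phase = t / blink_len;
    1.0 - (2.0 * phase - 1.0).abs()
}
```

Each frame you would call something like `w.set("eyeBlinkLeft", blink_weight(t, 4.0, 0.2))` alongside the expression weights from `make_smile`.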
Add Inverse Kinematics for VR Tracking
IK maps sparse VR tracking data (headset + controllers) to the full skeleton using the FABRIK algorithm.
- Configure 3-point IK (head + two hands):

```rust
use aether_avatar::ik::{IkSolver, ThreePointIkConfig, IkTarget};

fn setup_ik(skeleton: &Skeleton) -> IkSolver {
    let config = ThreePointIkConfig {
        head_bone: "Head".into(),
        left_hand_bone: "LeftHand".into(),
        right_hand_bone: "RightHand".into(),
        spine_bones: vec!["Spine".into(), "Spine1".into(), "Spine2".into()],
        max_iterations: 10,
        tolerance: 0.001,
    };
    IkSolver::three_point(skeleton, config)
}
```
- Each frame, feed tracking data into the solver:

```rust
fn solve_ik(solver: &mut IkSolver, skeleton: &mut Skeleton) {
    let head = IkTarget { position: [0.0, 1.7, 0.0], rotation: Some([0.0, 0.0, 0.0, 1.0]) };
    let lh = IkTarget { position: [-0.3, 1.0, -0.4], rotation: Some([0.0, 0.0, 0.0, 1.0]) };
    let rh = IkTarget { position: [0.3, 1.0, -0.4], rotation: Some([0.0, 0.0, 0.0, 1.0]) };
    solver.solve(skeleton, &head, &lh, &rh);
}
```
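FABRIK itself is compact enough to sketch in full. This standalone 2D toy version (independent of `aether-avatar`) shows the backward/forward passes the solver iterates until the end effector is within tolerance of the target, while segment lengths stay fixed:

```rust
fn dist(a: [f32; 2], b: [f32; 2]) -> f32 {
    ((a[0] - b[0]).powi(2) + (a[1] - b[1]).powi(2)).sqrt()
}

/// Move `to` so it lies exactly `len` away from `from`, along the current direction.
fn place(from: [f32; 2], to: [f32; 2], len: f32) -> [f32; 2] {
    let d = dist(from, to).max(1e-6);
    [from[0] + (to[0] - from[0]) * len / d,
     from[1] + (to[1] - from[1]) * len / d]
}

/// FABRIK on a joint chain: joints[0] is the fixed root, the last joint chases `target`.
fn fabrik(joints: &mut [[f32; 2]], lengths: &[f32], target: [f32; 2],
          iterations: usize, tolerance: f32) {
    let root = joints[0];
    for _ in 0..iterations {
        // Backward pass: pin the end effector to the target, walk toward the root.
        *joints.last_mut().unwrap() = target;
        for i in (0..joints.len() - 1).rev() {
            joints[i] = place(joints[i + 1], joints[i], lengths[i]);
        }
        // Forward pass: pin the root back in place, walk toward the end effector.
        joints[0] = root;
        for i in 0..joints.len() - 1 {
            joints[i + 1] = place(joints[i], joints[i + 1], lengths[i]);
        }
        if dist(*joints.last().unwrap(), target) < tolerance {
            break;
        }
    }
}
```

The engine's 3-point solver runs the same idea per limb chain in 3D, with the spine bones as the chain between hips and head.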
- For full-body tracking (hip + feet trackers), use 6-point IK:

```rust
use aether_avatar::ik::SixPointIkConfig;

let config = SixPointIkConfig {
    head_bone: "Head".into(), left_hand_bone: "LeftHand".into(),
    right_hand_bone: "RightHand".into(), hip_bone: "Hips".into(),
    left_foot_bone: "LeftFoot".into(), right_foot_bone: "RightFoot".into(),
    max_iterations: 15, tolerance: 0.001,
};
let solver = IkSolver::six_point(skeleton, config);
```
Configure LOD
- Define distance-based LOD tiers to reduce rendering cost for distant avatars:

```rust
use aether_avatar::avatar_lod::{AvatarLodConfig, AvatarLodTier, select_lod_tier};

let lod_config = AvatarLodConfig {
    tiers: vec![
        AvatarLodTier::FullMesh { max_distance: 5.0 },
        AvatarLodTier::Simplified { max_distance: 30.0 },
        AvatarLodTier::Billboard { max_distance: 100.0 },
        AvatarLodTier::Dot,
    ],
    hysteresis: 1.0,
    transition_duration: 0.3,
};
```
| Tier | Distance | Rendering |
|---|---|---|
| FullMesh | 0-5m | Full geometry, blend shapes, GPU skinning |
| Simplified | 5-30m | Reduced mesh, no blend shapes |
| Billboard | 30-100m | Camera-facing texture card |
| Dot | 100m+ | Single colored point |
The hysteresis value (1m buffer) prevents rapid LOD switching near boundaries.
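The selection logic can be sketched standalone. This illustrative version (not the engine's `select_lod_tier` implementation) works on tier indices with upper distance bounds, and shows how the hysteresis margin blocks a switch until the distance clearly crosses a boundary:

```rust
/// `boundaries` holds the upper distance bound of each tier except the last,
/// e.g. [5.0, 30.0, 100.0] for FullMesh/Simplified/Billboard, with Dot unbounded.
/// Keeps the current tier while the distance hovers within `hysteresis`
/// of a boundary, so avatars at a boundary don't flicker between tiers.
fn select_lod(boundaries: &[f32], current: usize, distance: f32, hysteresis: f32) -> usize {
    // Tier the raw distance falls into, ignoring hysteresis.
    let raw = boundaries.iter().position(|&b| distance <= b).unwrap_or(boundaries.len());
    if raw == current {
        return current;
    }
    if raw > current {
        // Coarser tier: the distance must clear the current tier's
        // boundary by the hysteresis margin before we switch.
        if distance > boundaries[current] + hysteresis { raw } else { current }
    } else {
        // Finer tier: the distance must come inside the boundary
        // below the current tier by the hysteresis margin.
        if distance < boundaries[current - 1] - hysteresis { raw } else { current }
    }
}
```

With a 1m hysteresis, an avatar drifting between 4.5m and 5.5m stays in whichever tier it already occupies instead of toggling every frame.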
Validate Performance
- Check that the avatar meets world budget requirements:

```rust
use aether_avatar::performance_rating::{validate_avatar, PerformanceBudgetTable, AvatarMeshStats};

let stats = AvatarMeshStats {
    // Rough triangle count, assuming a non-indexed triangle list.
    polygon_count: asset.vertex_count / 3,
    material_slots: asset.material_count,
    bone_count: asset.skeleton.as_ref().map_or(0, |s| s.bones.len()) as u32,
};
let rating = validate_avatar(&stats, &PerformanceBudgetTable::default());
println!("Rating: {:?} (passed: {})", rating.tier, rating.passed);
```
Default budgets: S (10k polys), A (25k), B (50k), C (75k). A world can require a minimum tier.
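The polygon side of that check reduces to a lookup against the budgets above. A standalone sketch (polygon count only; the real validation also weighs material slots and bone count):

```rust
/// Map a polygon count onto the S/A/B/C budget tiers quoted above.
/// Returns None if the mesh exceeds every budget.
fn rate_polygons(polygon_count: u32) -> Option<char> {
    const BUDGETS: [(char, u32); 4] =
        [('S', 10_000), ('A', 25_000), ('B', 50_000), ('C', 75_000)];
    BUDGETS
        .iter()
        .find(|&&(_, max)| polygon_count <= max)
        .map(|&(tier, _)| tier)
}
```

A world requiring tier B or better would then reject any avatar for which this returns `Some('C')` or `None`.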
Test in a Scene
- Wire everything into a loop:

```rust
fn main() {
    let asset = import_avatar();
    let skeleton = setup_skeleton(&asset);
    let _skinning = setup_skinning(&skeleton);
    let _blend_shapes = setup_blend_shapes(&asset);
    // Run the performance validation from the previous section here if desired.

    let lod_config = AvatarLodConfig {
        tiers: vec![
            AvatarLodTier::FullMesh { max_distance: 5.0 },
            AvatarLodTier::Simplified { max_distance: 30.0 },
            AvatarLodTier::Billboard { max_distance: 100.0 },
            AvatarLodTier::Dot,
        ],
        hysteresis: 1.0,
        transition_duration: 0.3,
    };

    let mut ik_solver = setup_ik(&skeleton);
    let mut skel = skeleton.clone();
    let mut current_lod = AvatarLodTier::FullMesh { max_distance: 5.0 };

    for frame in 0..300 {
        solve_ik(&mut ik_solver, &mut skel);
        let _palette = compute_tpose(&skel);

        // Simulate the avatar moving away from the camera.
        let distance = 3.0 + (frame as f32 * 0.1);
        current_lod = select_lod_tier(&lod_config, &current_lod, distance);

        if frame % 60 == 0 {
            println!("Frame {}: LOD={:?}, dist={:.1}m", frame, current_lod, distance);
        }
    }
}
```
- Run the demo from the crate directory:

```bash
cd my-avatar-demo && cargo run
```
T-Pose Calibration
For VR, calibrate the skeleton to match the player's real proportions:
```rust
use aether_avatar::calibration::{CalibrationData, calibrate_skeleton};

let calibration = CalibrationData {
    head_height: 1.72,
    arm_span: 1.68,
    shoulder_width: 0.42,
};
calibrate_skeleton(&mut skeleton, &calibration);
```
Have the player stand in a T-pose and capture tracking positions. Calibration scales bone lengths proportionally so the avatar's limbs match the player's reach.
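As a simplification, the proportional scaling can be sketched standalone: derive one ratio from the player's measured height versus the model's authored height and apply it to every bone length (the real `calibrate_skeleton` may scale arms and legs independently using arm span and shoulder width):

```rust
/// Uniformly scale bone lengths so the avatar's head height matches the player's.
/// A deliberate simplification of full calibration, which treats limbs separately.
fn scale_bone_lengths(bone_lengths: &mut [f32],
                      model_head_height: f32,
                      player_head_height: f32) -> f32 {
    let scale = player_head_height / model_head_height;
    for len in bone_lengths.iter_mut() {
        *len *= scale;
    }
    scale
}
```

A 1.60m model calibrated to a 1.72m player scales every bone by 1.075, so hands and head land where the tracked hardware actually is.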
Next Steps
- Add lip sync by feeding audio through the `viseme` module of `aether-avatar`
- Connect avatar state to multiplayer (see Add Multiplayer)
- Explore subsurface scattering shaders in `avatar_shader` for photorealistic skin
- Set up animation state machines for procedural locomotion