Devlog #3: Implementing a Disintegration Effect with the Unity HDRP

A demonstration video of the disintegration effect

In this post, I discuss how I implemented a mesh disintegration effect for my game, Doors & Corners. This effect was built using Unity 2020 LTS and the High-Definition Render Pipeline (HDRP) 10.x. A link to the source can be found at the bottom of this post.

Background

Doors & Corners is a third-person shooter with a simple, low-poly art style. In the initial implementation, whenever a character was killed, they simply disappeared. This was a functional approach for handling dead characters, but it had two downsides. First, it wasn't very visually appealing; simply disappearing isn't that interesting. Second, it didn't give the player any clue about the fate of a character they were shooting at. If a target disappeared, was it because the target was hit? Or was the target now just hidden behind an obstacle or cover?

To address these shortcomings, I decided to implement a dissolve/disintegration effect that plays when a character is killed. I thought this fit well with Doors & Corners because the guns in it have an energy weapon vibe to them.

There are many demos and tutorials about how to achieve a dissolve effect in Unity (see this one from Brackeys as an example). A typical dissolve effect uses a shader to gradually transition a mesh's texture to transparent, often with an edge that appears to burn or glow. As the effect moves across a face in the mesh, the shader transitions the pixels from their base color, to burning/glowing, and finally to transparent. This presented a problem, though, as it conflicted with one of the guidelines for the art style of Doors & Corners: to use only one color per face of a mesh. Given this, I felt the typical dissolve effect didn't really fit the specific art style of the game.

Instead, I decided to draw inspiration from a project by Keijiro Takahashi that experimented with using Unity's Data-Oriented Tech Stack (DOTS) to decompose meshes triangle by triangle. I felt that breaking a mesh apart triangle by triangle fit well with the art style. Additionally, I thought the disintegration effect could be enhanced if, as the mesh came apart, the triangles that had separated from the mesh were converted to square particles.

Implementation

One of the easiest ways to implement an effect that alters the triangles of a mesh is a geometry shader. Unfortunately, geometry shaders aren't well supported by HDRP: technically they work, but they require writing a custom code shader, which means losing access to the HDRP lighting functionality. You can work around this by exporting the code from a basic shader built in ShaderGraph for HDRP and editing it to add the geometry shader, but that approach is extremely brittle and likely to break every time HDRP is upgraded. Geometry shaders are also unsupported on some platforms (e.g., Apple's Metal) and may be replaced by compute shaders in the future.

Instead, I opted to break down the mesh using C# code and then handle the manipulation and rendering of each triangle independently. This is done by reading the mesh data for the model being disintegrated, looping over its triangles, and generating a new single-triangle mesh for each one. The code to do this is pretty straightforward:

private void DecomposeMesh () {
  // store local references to mesh data 
  var meshTriangles = sourceMesh.triangles;
  var meshVertices = sourceMesh.vertices;
  var meshUV = sourceMesh.uv;
  var meshNormals = sourceMesh.normals;
  
  // allocate the lists
  var indexCount = meshTriangles.Length; // three indices per triangle
  var triangleCount = indexCount / 3;
  triangleMeshes = new List<Mesh> (triangleCount);
  initialPositions = new List<Vector3> (triangleCount);
  triangleNormals = new List<Vector3> (triangleCount);
  particleLifetimes = new List<float> (triangleCount);
  
  // process every triangle in the source mesh
  for (int t = 0; t < indexCount; t += 3) {
    // get the indices for the current triangle
    var t0 = meshTriangles[t];
    var t1 = meshTriangles[t + 1];
    var t2 = meshTriangles[t + 2];
	
    // get the vertices
    var vertex0 = meshVertices[t0];
    var vertex1 = meshVertices[t1];
    var vertex2 = meshVertices[t2];
	
    // calculate the triangle position (in object-space coordinates)
    var trianglePosition = (vertex0 + vertex1 + vertex2) / 3f;
	
    // create vertex array for triangle mesh. vertex positions
    // should be in object-space coordinates (for the disintegratable
    // object)
    var vertices = new Vector3[3] {
      vertex0 - trianglePosition,
      vertex1 - trianglePosition,
      vertex2 - trianglePosition
    };

    // create uv array for triangle mesh
    var uvs = new Vector2[3] {
      meshUV[t0],
      meshUV[t1],
      meshUV[t2]
    };

    // get the normals
    var normal0 = meshNormals[t0];
    var normal1 = meshNormals[t1];
    var normal2 = meshNormals[t2];

    // create normal array for triangle mesh
    var normals = new Vector3[3] {
      normal0,
      normal1,
      normal2
    };

    // calculate the triangle normal (average of vertex normals)
    var triangleNormal = (normals[0] + normals[1] + normals[2]) / 3f;

    // create a new mesh for the current triangle from the source mesh
    var triangleMesh = new Mesh {
      vertices = vertices,
      uv = uvs,
      normals = normals,
      triangles = new int[] { 0, 1, 2 }
    };

    // calculate the particle lifetime
    var lifetime = Random.Range (minLifetime, maxLifetime);

    // add the triangle/particle data to the lists
    initialPositions.Add (trianglePosition);
    triangleNormals.Add (triangleNormal);
    triangleMeshes.Add (triangleMesh);
    particleLifetimes.Add (lifetime);
  }
}

In addition to creating a new mesh for each triangle, the code also calculates its position (in object-space coordinates), generates a random lifetime (in seconds), and stores its normal for later use. All of this data depends only on the mesh, so it can be computed before the disintegration effect is triggered (e.g., on Start).
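
The wiring for this isn't shown above; here is a minimal sketch of how it might look, assuming the component lives on the same GameObject as the MeshFilter and MeshRenderer (sourceMesh, sourceRenderer, and sourceMaterial are the fields referenced throughout this post):

private void Start () {
  // cache the references used by DecomposeMesh and Disintegrate
  sourceMesh = GetComponent<MeshFilter> ().sharedMesh;
  sourceRenderer = GetComponent<MeshRenderer> ();
  sourceMaterial = sourceRenderer.sharedMaterial;

  // decompose ahead of time so triggering the effect stays cheap
  DecomposeMesh ();
}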

When the disintegration is triggered, additional data needs to be generated for each triangle/particle. The disintegration system generates the velocity and delay of each individual particle based on its proximity to the "origin point" of the effect: the world-space coordinates the disintegration originates from. Since the origin point is only known when the effect is triggered, this information cannot be generated ahead of time.

Once the trigger-time data is generated, a new GameObject is spawned to handle the display of the effect. This is done so that the source GameObject can be altered or destroyed without interrupting the effect. The Play method is then invoked on the newly spawned object, with the generated data passed as parameters.

public void Disintegrate (Vector3 originPoint) {
  // count of triangles/particles
  var count = initialPositions.Count;

  // create lists for generated data
  // the velocities for each particle
  var particleVelocities = new List<Vector3> (count);
  
  // the delay for each particle
  var particleDelay = new List<float> (count);
  
  // the age of each particle
  var particleAge = new List<float> (count);
  
  // the indices of the active particles in the data lists
  var activeParticles = new List<int> (count);

  // make the origin point relative to the object position (the particle
  // positions are rotated into the same frame below)
  originPoint -= transform.position;

  // get the object rotation
  var objectRotation = transform.rotation;

  // loop over particles
  for (int i = 0; i < count; i++) {
    // get particle position and normal, rotated by the object rotation
    var position = objectRotation * initialPositions[i];
    var normal = objectRotation * triangleNormals[i];

    // calculate velocity
    // each particle moves away from the origin point and along the normal
    // of its triangle in the source mesh, combined with a uniform drift.
    var velocity = (position - originPoint).normalized * particleSpeed;
    velocity += normal * particleSpread;
    velocity += particleDrift;

    // calculate the delay
    // delay is determined by the distance of the particle from the origin
    // point plus a random value between the variance min and max.
    var delay = Vector3.Distance (position, originPoint) * delayFactor;
    delay += Random.Range (delayVarianceMinimum, delayVarianceMaximum);
		
    // add data to the lists
    particleVelocities.Add (velocity);
    particleDelay.Add (delay);
    particleAge.Add (0);

    // add the index of the current particle/triangle to the active list
    activeParticles.Add (i);
  }

  // disable the source renderer
  sourceRenderer.enabled = false;

  // Create and Play the effect
  var effect = DisintegrateEffect.Create (transform.position, transform.rotation);
  effect.Play (triangleMeshes, sourceMaterial, particleMesh, particleMaterial,
    initialPositions, particleVelocities, particleLifetimes, particleDelay, 
    particleAge, activeParticles
  );
}
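
Neither Create nor Play is shown in this post; both are small. A minimal sketch of what they might look like, assuming DisintegrateEffect is a MonoBehaviour that simply stores the passed data in the fields used by Update and RenderEffect below:

public static DisintegrateEffect Create (Vector3 position, Quaternion rotation) {
  // spawn a temporary host object so the effect can outlive the source GameObject
  var host = new GameObject ("DisintegrateEffect");
  host.transform.SetPositionAndRotation (position, rotation);
  return host.AddComponent<DisintegrateEffect> ();
}

public void Play (List<Mesh> triangleMeshes, Material sourceMaterial,
  Mesh particleMesh, Material particleMaterial, List<Vector3> initialPositions,
  List<Vector3> particleVelocities, List<float> particleLifetimes,
  List<float> particleDelays, List<float> particleAges, List<int> activeParticles) {
  // store the per-particle data in fields for use during rendering
  this.triangleMeshes = triangleMeshes;
  this.sourceMaterial = sourceMaterial;
  this.particleMesh = particleMesh;
  this.particleMaterial = particleMaterial;
  this.initialPositions = initialPositions;
  this.particleVelocities = particleVelocities;
  this.particleLifetimes = particleLifetimes;
  this.particleDelays = particleDelays;
  this.particleAges = particleAges;
  this.activeParticles = activeParticles;
}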

The effect system is essentially a simple particle system. On Update, it loops over all the active particles, updating their ages and positions, rendering them, and removing those that have exceeded their lifetime. Particles whose age is still within their assigned delay do not have their position modified and are rendered as the triangle they represent from the original mesh. Particles whose age exceeds the delay have their position updated based on their velocity and are rendered using the shared particle mesh. Additionally, these particles are scaled down linearly to a size of zero as their age approaches their lifetime.

private void Update () {
  // exit if active particle count is zero
  if (activeParticles.Count == 0) {
    return;
  }

  RenderEffect ();
}

private void RenderEffect () {
  // store a local reference to the main camera
  var mainCamera = Camera.main;

  // get the camera rotation (particles need to be facing the camera)
  var cameraRotation = mainCamera.transform.rotation;

  // locally store object position and rotation
  var objectPosition = transform.position;
  var objectRotation = transform.rotation;

  // loop over the active particles to render and update their data. loop 
  // backward because the list will be deleted from while looping.
  for (int i = activeParticles.Count - 1; i >= 0; i--) {
    // get the particle index for the current active particle
    var index = activeParticles[i];

    // has the particle's delay elapsed?
    var isDelayOver = particleAges[index] > particleDelays[index];

    // determine the scale based on the current age (only scale once the delay is over)
    var scale = isDelayOver ? 
      (particleAges[index] - particleDelays[index]) / particleLifetimes[index] : 0;

    // determine the speed step based on whether it is still within the 
    // delayed window (if so, the speed is zero)
    var speedStep = isDelayOver ? 1 : 0;

    // calculate the position based on the initial position, velocity, age, 
    // and object rotation
    var position = initialPositions[index] + (
      (particleAges[index] - particleDelays[index]) * 
      speedStep * particleVelocities[index]
    );
    position = (objectRotation * position) + objectPosition;

    // after the delay, render as a camera-facing particle that shrinks
    // with age.
    if (isDelayOver) {
      var matrix = Matrix4x4.TRS (position, cameraRotation, 
        Vector3.one * (1 - scale));
      renderMatrices.Add (matrix);
    }
    
    // within the delay, render the original triangle from the source mesh.
    else {
      var matrix = Matrix4x4.TRS (position, objectRotation, Vector3.one);
      Graphics.DrawMesh (triangleMeshes[index], matrix, sourceMaterial, 0,
        mainCamera, 0, null, true, true, false);
    }

    // update the age
    particleAges[index] += Time.deltaTime;

    // delete the particle if it has exceeded its lifetime
    if ((particleAges[index] - particleDelays[index]) >= particleLifetimes[index]) {
      // swap this active particle to the back of the list and remove
      activeParticles.RemoveAtSwapBack (i);
    }
  }

  // draw the particles
  if (renderMatrices.Count > 0) {
    DrawParticles (mainCamera);
  }
}

private void DrawParticles (Camera camera) {
  // render the particles in batches, because DrawMeshInstanced only
  // supports drawing 1023 meshes per call
  for (int i = 0; i < renderMatrices.Count;) {
    // get the next batch of matrices to draw
    var count = Mathf.Min (1023, renderMatrices.Count - i);
    var matrixRange = renderMatrices.GetRange (i, count);

    // draw instanced
    Graphics.DrawMeshInstanced (particleMesh, 0, particleMaterial, matrixRange,
      null, ShadowCastingMode.On, true, 0, camera);
    i += count;
  }

  // clear the render matrices for the next frame
  renderMatrices.Clear ();
}

Performance Considerations/Optimizations

To minimize the performance impact when the effect is triggered, the code precalculates as much information as possible. The mesh decomposition and the generation of per-particle data that is independent of trigger-time information are executed before the effect is triggered (in the Start method). This adds some overhead on startup, but that is less impactful than performing these calculations at the moment the effect is triggered.

To achieve the best possible performance, it is also necessary to minimize the amount of work done by the garbage collector. In the initial implementation, the code that built the individual triangle meshes accessed the source mesh's triangles, vertices, uv, and normals properties from within the loop. Each access to one of these Mesh properties returns a fresh copy of the underlying array, so every iteration allocated garbage.

This created a great deal of work for the garbage collector, which in turn had a very negative impact on performance (the frame in which the effect was triggered took an average of more than 200ms to execute). To address this, the code was refactored to store local references to the mesh data arrays before the loop. This dramatically reduced the work of the garbage collector and hugely reduced the time needed to generate the particle meshes.
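
To illustrate the difference, here is a simplified version of the problem (not the actual original code):

// before: every access to sourceMesh.triangles and sourceMesh.vertices
// allocates a fresh copy of the array, once per iteration
for (int t = 0; t < sourceMesh.triangles.Length; t += 3) {
  var vertex0 = sourceMesh.vertices[sourceMesh.triangles[t]];
  // ...
}

// after: the arrays are copied once, outside the loop
var meshTriangles = sourceMesh.triangles;
var meshVertices = sourceMesh.vertices;
for (int t = 0; t < meshTriangles.Length; t += 3) {
  var vertex0 = meshVertices[meshTriangles[t]];
  // ...
}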

The use of Graphics.DrawMesh() and Graphics.DrawMeshInstanced() to render the particle meshes was another performance optimization. GameObjects have a huge amount of overhead, so treating each particle as a separate GameObject with its own Renderer was never considered. Tracking the particle data from within a single effect GameObject and rendering the particles using the DrawMesh methods from the Graphics class significantly improves performance by avoiding this overhead.

The use of DrawMeshInstanced() created another performance optimization opportunity: GPU Instancing. Normally, GPU Instancing cannot be utilized with HDRP, because it is incompatible with the Scriptable Render Pipeline (SRP) Batcher. However, when Graphics.DrawMeshInstanced() is called to render the particles directly, GPU Instancing can be utilized. Since the effect uses the same mesh and material for all the particles that have aged beyond the delay (i.e., those being rendered as particles rather than triangles), the use of GPU Instancing provides a dramatic performance boost.
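
One practical note: DrawMeshInstanced() requires a material with instancing enabled, which can be set via the "Enable GPU Instancing" checkbox on the material or from code:

// Graphics.DrawMeshInstanced throws an exception if the material
// does not have instancing enabled
if (!particleMaterial.enableInstancing) {
  particleMaterial.enableInstancing = true;
}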

Limitations

The current implementation has several notable limitations:

  • It cannot use transparent materials for particles - There is a bug in the current version of HDRP where DrawMeshInstanced calls do not render transparent materials correctly: the shadow is rendered, but not the mesh itself.
  • It does not handle deformable meshes - Since it reads the source mesh ahead of time when generating the triangle meshes, it cannot account for any mesh deformations that are being applied when the effect is triggered.
  • It does not perform any tessellation when generating the triangle meshes - If the mesh contains very large triangles, large gaps will appear as those triangles are converted to particles during the disintegration. Likewise, a large variation in triangle size in the source mesh produces a correspondingly large variation in the size of the gaps that appear as the mesh disintegrates.

Future Performance Optimizations

There are a few potential performance optimizations that I've identified for future exploration. First, the current code naively assumes that every triangle in the source mesh is unique. For my project this is very wasteful, because many of the triangles in my models are identical. If identical triangles shared a single triangle mesh, rather than each having its own, the triangle meshes could be rendered with DrawMeshInstanced (as the particle meshes are). This would also allow for the use of GPU Instancing.
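
As a rough sketch of that deduplication (hypothetical code, keying only on vertex positions; a real implementation would also need to verify that the UVs and normals match):

// cache mapping origin-centered vertex triples to a shared mesh,
// declared once outside the decomposition loop
var sharedMeshes = new Dictionary<(Vector3, Vector3, Vector3), Mesh> ();

// inside the loop, reuse an existing mesh when the vertices match
// a previously seen triangle
var key = (vertices[0], vertices[1], vertices[2]);
if (!sharedMeshes.TryGetValue (key, out var triangleMesh)) {
  triangleMesh = new Mesh {
    vertices = vertices,
    uv = uvs,
    normals = normals,
    triangles = new int[] { 0, 1, 2 }
  };
  sharedMeshes.Add (key, triangleMesh);
}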

Additionally, the current implementation is a MonoBehaviour that gets attached to each disintegratable Renderer's GameObject. This won't scale well because it duplicates a lot of work: when multiple GameObjects share the same mesh, that mesh is decomposed once for each of them. Refactoring the system so that GameObjects reuse decompositions of meshes that have already been processed would boost performance.
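
A hypothetical sketch of that reuse, caching decompositions in a static dictionary keyed by the source mesh (DecomposedMeshData would be a container for the lists that DecomposeMesh currently writes to fields):

// shared cache so each distinct mesh is decomposed only once
private static readonly Dictionary<Mesh, DecomposedMeshData> decompositionCache =
  new Dictionary<Mesh, DecomposedMeshData> ();

private void Start () {
  // reuse an existing decomposition if this mesh was processed before
  if (!decompositionCache.TryGetValue (sourceMesh, out decomposedData)) {
    decomposedData = DecomposeMesh (sourceMesh); // refactored to return its results
    decompositionCache.Add (sourceMesh, decomposedData);
  }
}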

Finally, this system would likely benefit from the use of the Data-Oriented Tech Stack (DOTS). Many of the calculations it makes are basic math operations being performed on arrays of primitive data. This makes it an ideal candidate for using Burst compilation and for processing the data in parallel using the Jobs system.
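
As a rough illustration of the direction (a sketch, not code from the project), the per-particle update could become a Burst-compiled IJobParallelFor operating on NativeArrays:

using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using Unity.Mathematics;

// hypothetical job that advances particle ages and positions in parallel
[BurstCompile]
struct UpdateParticlesJob : IJobParallelFor {
  public float deltaTime;
  [ReadOnly] public NativeArray<float3> initialPositions;
  [ReadOnly] public NativeArray<float3> velocities;
  [ReadOnly] public NativeArray<float> delays;
  public NativeArray<float> ages;
  [WriteOnly] public NativeArray<float3> positions;

  public void Execute (int index) {
    ages[index] += deltaTime;

    // particles only move once their delay has elapsed
    var t = math.max (0f, ages[index] - delays[index]);
    positions[index] = initialPositions[index] + velocities[index] * t;
  }
}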

Public Repository and Future Updates

I’ve made the proof of concept available as a project on GitHub. Feel free to use this as a reference if you’re trying to create a similar system.

As I integrate this effect into Doors & Corners, I am certain that I will run into new challenges and discover additional opportunities for optimization. I plan to write follow-up posts to share what I learn as I go.