Unity: High CPU on Small Projects

Quick Tip: I have been working on a TCP/IP networking project using a client/server architecture. The client (and the server, for that matter) is a relatively small code base, and the UI and object count in the scene are really low. I had been struggling with CPU load in the project and feverishly trying to work out why my code was baking the CPU (and GPU!). I assumed it was something stupid I had done in a loop with the networking structures I was not that familiar with. It's really not easy to concentrate on new code when your laptop fan is screaming at you! I'd hit Play and the CPU would spike almost immediately, so I would switch to my local terminal and scrape through the open ports and network connections looking for a smoking gun. It turns out it was the default frame rate in the Editor trying to deliver the fastest graphics performance it could on my PC. With such a low object count and very simple graphics being asked for, it was running like a Formula One race car when all I wanted was an old jalopy.

This is my CPU on Speed

Solution: Set Target Frame Rate!

By default, Unity will attempt to run your project as fast as possible. Frames will be rendered as quickly as they can be (at most capped by your display device's refresh rate when vSync is on).

There are two ways to control frame rate:

Application.targetFrameRate – controls the frame rate by specifying the number of frames your game tries to render per second. (I wrote a script to use this – see below).

QualitySettings.vSyncCount – specifies how many screen refreshes should pass between rendered frames (look for it as VSync Count under Project Settings | Quality). For a 60Hz display, setting vSyncCount = 2 will cause Unity to render at 30fps in sync with the display.

Note that mobile platforms ignore QualitySettings.vSyncCount and use Application.targetFrameRate to control the frame rate.
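If you would rather lock rendering to the display instead, here is a minimal sketch of the vSync approach (my own example, not part of the original project):

using UnityEngine;

public class SetVSync : MonoBehaviour
{
    private void Awake()
    {
        // Render one frame for every two screen refreshes (30fps on a 60Hz display).
        // 0 disables vSync entirely.
        QualitySettings.vSyncCount = 2;

        // When vSync is active on desktop platforms, Application.targetFrameRate is ignored.
        Application.targetFrameRate = -1;
    }
}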

The default value of Application.targetFrameRate is -1, which means "use the platform's default frame rate".

I set mine to 20 using the script below, and when I hit Play I got this result:

This is my CPU chilling out

using UnityEngine;

public class SetFrameRate : MonoBehaviour
{
    [SerializeField]    // Just so you can check it in the Inspector
    private int FrameRate = 20; // 20 is really low but got my CPU down to < 10% - 30 is the target for mobile and was < 20% CPU usage
    //private int FrameRate = -1; // reset to default

    private void Awake()
    {
        Application.targetFrameRate = FrameRate;
    }
}

I attached it to my Camera object.

Set Frame Rate

One interesting behavior of setting this via script in Unity 2020.3.26f1 was that once the script was attached to the Camera object and Play was run for the first time, the frame rate seemed to be set somewhere internally in the Engine. When I removed the script (for testing) the frame rate did not automatically reset to -1. I had to re-attach the script and update it to set the frame rate back to the default. I searched the settings in the Inspector and Preferences and couldn't find a visible reference to it anywhere. So be careful if you are going to put this in a production build: reset it before releasing, otherwise you might end up with a lower frame rate than the platform could achieve by default.
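One way around that (a variation on the script above; treat it as a sketch rather than something I shipped) is to only throttle while you are working in the Editor and let builds run at the platform default:

using UnityEngine;

public class SetFrameRate : MonoBehaviour
{
    [SerializeField]
    private int FrameRate = 20;

    private void Awake()
    {
#if UNITY_EDITOR
        Application.targetFrameRate = FrameRate; // keep the fans quiet while developing
#else
        Application.targetFrameRate = -1;        // builds run at the platform default
#endif
    }
}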

Enough procrastination – back to sockets, ports and buffers.

Blender 2D Animation with Meshes

This is a follow on from the workflow discussed in the previous post: Preparing 2D Art for Animation.

This is the end result of the process described:

Sprightly Spring Deer

I'm looking to see whether there are any advantages to using Blender as a 2D animation tool (animating meshes) over Unity's Sprite and Spline based animation system. The differences between them in effort, usability, and flexibility are many and subtle, hence the investigation. The two biggest differences for me are:

1. With the Blender option you are animating in Blender, which I like much more than animating in Unity. The downside is that you have to import the animations into Unity, and they are pretty hard to modify once they are there. That also means it's harder to adjust them to react to other actors, objects, and scene elements once they are in the game.

2. With the Blender approach it's a mesh in Unity, not a Sprite, so you can do all the transforms that meshes support. You can also light it as a mesh (the default Sprite Renderer cannot be lit). Being able to use light effects on a 2D image within the game is pretty huge for making it look good and for effects or plot devices (think lightning on a dark and stormy night). You can get light effects on Sprites in Unity if you swap out the default shader for another shader, or with the Lightweight Render Pipeline (LWRP), but not every project will suit that. There are also Unity solutions that use custom shaders or a similar mesh and material based approach (see further below for more on that).

Comparing Unity Sprites to Blender Meshes in Unity

The images directly below are taken from the Game screen in Unity. The one on the left is the Sprite based Spline rendering, while the one on the right is the mesh based FBX from Blender. You can see the difference in quality between the Sprite on the left and the lossy baked images of the mesh on the right. It's not huge, and it can be improved with some tweaking (Bilinear filter mode and upping the Aniso Level to 2 helped with the anti-aliasing, and working with the material's Metallic and Smoothness parameters also helped).

Sprite (left) and Mesh (right)
Night Time lighting affects the Blender mesh image but not the Sprite based image.
Lighting effects can be much more complex and creatively arranged to hit separate parts of the mesh.
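As a side note, the Bilinear filter and Aniso Level tweaks mentioned above can be locked in at import time with a small editor script so they survive re-imports. This is just a sketch: the "DeerBakes" folder name is made up for the example, and the script needs to live in an Editor folder.

using UnityEngine;
using UnityEditor;

public class BakedImageImportSettings : AssetPostprocessor
{
    private void OnPreprocessTexture()
    {
        // Only touch the baked Blender textures (hypothetical folder name).
        if (!assetPath.Contains("DeerBakes")) return;

        var importer = (TextureImporter)assetImporter;
        importer.filterMode = FilterMode.Bilinear;
        importer.anisoLevel = 2;
    }
}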

As stated above, you can drop an image onto an object in Unity as a material, but it doesn't light as well and is prone to shadowing. Use the Cutout rendering mode and not the Transparent one in Unity, or you get a shadow on the transparency. The image below shows a material with a Standard shader and the image applied to a Unity 2D plane mesh; there is a shaded square around the outside that marks the image boundary.

Transparency Shader
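Normally you just pick Cutout from the material's Rendering Mode dropdown, but for reference this is roughly what that dropdown changes on a Standard shader material if you want to do it from code. It's a sketch based on the property and keyword names the built-in Standard shader uses, so double-check it against your Unity version:

using UnityEngine;
using UnityEngine.Rendering;

public class MakeStandardMaterialCutout : MonoBehaviour
{
    private void Awake()
    {
        Material mat = GetComponent<Renderer>().material;

        mat.SetFloat("_Mode", 1f);                    // 1 = Cutout in the Standard shader
        mat.SetFloat("_Cutoff", 0.5f);                // alpha threshold for the cutout
        mat.SetOverrideTag("RenderType", "TransparentCutout");
        mat.EnableKeyword("_ALPHATEST_ON");
        mat.DisableKeyword("_ALPHABLEND_ON");
        mat.DisableKeyword("_ALPHAPREMULTIPLY_ON");
        mat.renderQueue = (int)RenderQueue.AlphaTest; // 2450
    }
}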

The image below is the same sprite using a material with a Standard shader and the Cutout rendering mode (the diffuse sprite shader worked similarly). The top one is a normal Sprite Renderer with the custom material replacing the default-sprite material. The bottom one is a Unity 2D Plane with the custom material applied. Both tests look better than the Blender imported model, can be layered, and they react to lighting in game.
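For the Sprite Renderer version, all that is really happening is a material swap. If you want to do the same swap from code, a minimal sketch looks like this ("litSpriteMaterial" is whatever lit material you have made, for example one using the Sprites/Diffuse shader in the built-in pipeline):

using UnityEngine;

public class UseLitSpriteMaterial : MonoBehaviour
{
    [SerializeField]
    private Material litSpriteMaterial; // e.g. a material using the Sprites/Diffuse shader

    private void Awake()
    {
        // Replace Sprites/Default so scene lights affect this sprite.
        GetComponent<SpriteRenderer>().material = litSpriteMaterial;
    }
}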

So these are the alternatives to the process I'm describing below with Blender, and they are good and valid options. I guess the only reason I would choose the Blender animation workflow is that I hate doing this process in Unity's Animation window: Add Property | drill down through the object | the child | the other child | the bone | the transform | and finally the tiny little plus sign that lets me add one manipulation point! For a Deer Kick I had 88 different animation points – that's a LOT of stupid clicking down through an object hierarchy to add Properties (I know you can hold down Shift and add more than one property at a time, but you still have to manually expand them all). The other alternative is to right-click and add all properties for an object, and then, if you are patient enough, remove the ones you don't use.

I do like the record feature that adds properties dynamically, but these problems, along with an interface I find finicky and too small, made me look at Blender.

Importing the Images to Blender and Setting up the Workspace

Moving on to working in Blender with images and meshes, the basic process is this:

  1. For every layer in the artwork of our animated character we exported a separate image file on a transparency. Each .png file is imported into Blender as an empty image object (Add | Empty | Image). You could use a reference or background image instead, but since all the parts might move I wanted to group them all under empties.
  2. A Mesh is created for each image and either shaped to the outline of the image or left as a plane and weighted correctly (more on that later).
  3. The image is baked into the UV of the mesh.
  4. The components are then parented to an Armature with automatic weights.
  5. The meshes are weight painted to correct the deforms.
  6. Now it’s ready for animation.

The image objects are all placed at the same origin (0, 0, 0) and rotated 90 degrees on the ‘x’ Axis so they are visible in the viewport from the “front” view.

All the Deer components Frankenstein’d together into a whole
The visibility of parts are toggled on and off so individual pieces can be worked on.

Making the Meshes

For each piece a mesh is made. I took two approaches here: 1. Model a plane mesh as closely as I could to the shape of the sprite. 2. Use a plain rectangular mesh and rely on weight painting to deform correctly.

For the modelling approach I started with an image and dragged a plane over it in Edit Mode as a wireframe. The origin of the plane was kept at (0, 0, 0) so all the pieces had a common reference (the same as all the images). Using basic mesh deforms and subdivision I created a mesh that matched the image.

The foreleg Mesh

This method was a lot of work, manually placing each vertex along the border of the image. If a vertex is placed a little bit outside the image you get white space on the final product, and if you don't come all the way to the edge you lose some of the black line and the smooth finish (the UV mapping is slightly out). Plus I found that if you have to warp the mesh too much for a sharp angle, or for an awkward placement of the square tiling, you get some minor defects along the line during animation.

Vertices placement

After about the fourth component I got a bit sick of manually moving vertices around, so I took another approach of just using a rectangular mesh and relying on the transparency of the image to do all the work. This is much easier and faster, but there were gotchas when adding the armature and weight painting. The rear leg below is just one big mesh, subdivided into enough squares to give a decent deform without stretching or warping the black line during animation.

Venison

In Solid shading here is a comparison of the rear leg mesh and the front leg mesh.

Solid Mesh Planes

The image below shows both meshes in Render mode (including the armature), and you really can't tell the difference between them.

Rendered Meshes

The whole mesh ended up looking like this:

Armature and Weight Painting

As you can see above, the armature was added and the mesh objects were parented to it with automatic weights. Because everything is a flat plane, and some planes are meant to overlap others (the closest front leg is in front of the torso and the back leg is behind it), parenting the armature with automatic weights meant that the front, middle, and rear meshes would all get an equal measure of weight in parts. This all had to be manually painted.

Here the Torso was weighted across three bones and only the rear was affecting the rump (any leg meshes had to be removed from these vertex groups).

Weights had to be carefully graded otherwise warping of the line would result:

The weight is too strong a transition here.
It causes artifacts like this.
This is the resulting gradient changes in weight to get a correctly deforming line.

The other problem was that random single or lone groups of vertices would be weighted to a bone and not visible until you moved it in pose mode:
A few vertices on the chest were registered to the root bone. These all have to be manually removed.

The other interesting anomaly with the large rectangular plane meshes was that the weights would sometimes cause improper warping of the mesh, bending it around itself in places, which showed up as black squares.

The foot vertex group covers all these vertices.
Which you cannot tell in edit mode when you select it with “show weights”.
During transform in animation these black marks show where the mesh does not warp properly.
The mesh is a mess.
It’s because the shin bone weight doesn’t go all the way to the edge.
It looks right in edit mode.
But if you use the vertex group to select all the vertices it should look like this (all the way to the edge).

These are pretty quick things to fix really, but it took a while to work out exactly what was happening. It was still faster than individually making all the mesh components by hand to fit the image.

Probably a better workflow would be to make reduced, simpler meshes that fit closer to the image without having to slavishly manhandle the vertices around the borders.

The Shading

UV Mapping is totally easy here but getting the material right was a bit tricky with the transparencies and images. This is the setup I used:

The Transparent Shader in Blender

That's about it for getting everything set up in Blender. For more info on the animation steps and getting it into Unity, see my other post: https://www.zuluonezero.net/2021/11/16/exporting-multiple-animations-from-blender-to-unity/

Preparing 2D Art for Animation

I’ve been doing some work on the 2D side of things in preparation for another game.

This has been the general workflow.

1. Make the assets in Clip Studio.

2. Pack the sprites with Free-Tex-Packer.

3. Import the art into Blender, make a mesh for each sprite and UV map it.

4. Add the Armature bones.

5. Weight Paint.

6. Animate.

7. Export from Blender as an *.fbx with the animations baked.

8. Import into Unity.

9. Add new Materials and import the UV images into Unity.

10. Add the *.fbx imported asset into a scene.

11. Add an Animator Component and drag the animations from the prefab into it.

12. Set up Triggers and connections for the animations (a quick sketch of firing one from code follows below).
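A minimal sketch of that last step, assuming a hypothetical trigger parameter called "Kick" in the Animator Controller (the trigger name and the key press are just for illustration):

using UnityEngine;

public class DeerAnimationDriver : MonoBehaviour
{
    // Hashing the parameter name once is cheaper than passing the string every time.
    private static readonly int KickTrigger = Animator.StringToHash("Kick");

    private Animator animator;

    private void Awake()
    {
        animator = GetComponent<Animator>();
    }

    private void Update()
    {
        // Example input only: fire the Kick transition on the space bar.
        if (Input.GetKeyDown(KeyCode.Space))
        {
            animator.SetTrigger(KickTrigger);
        }
    }
}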

It's a lot of work, especially if you make a custom mesh for each piece of art. But I did all this because I really like animating in Blender (especially now that the Pose Library is functional and part of the Asset Browser). To tell the truth, though, I think I got better results using the Spline system in Unity with much less work. There are trade-offs, and I'll go through them below after more exposition on the workflow.

In this post I’ll go through the asset creation process in Clip Studio.

Making the Asset

The 2D game has a bunch of cute animals so I dug deep into the Disney Sketchbook by Ken Shue and pulled out Bambi for inspiration.

An early Disney sketch

Using this as a rough guide I drafted a few basic shapes for a “Deer” character which looked like this:

Rough Sketch for the 2D Asset

I started using Clip Studio last year in place of the Gimp. I've tried all sorts of painting programs and would choose Gimp over most of them (I will not spring for a paid version of Photoshop – it's extortion!), but Clip Studio won me over with its brushes. It's not expensive by comparison, and I really like how it fits the specific things I want out of an art program. I'll often go back to Gimp for projects that require a lot of filters and image manipulation, but for straight drawing on the PC Clip Studio is a good fit for me. I like how you can make custom tools that mimic their real-life counterparts, like a pencil or brush, and I find this program better at it than most (though Adobe Sketchbook runs a close second).

To start with, I create a set of layers for the inking of the artwork, one for each moving element in the final asset.

There is a pretty simple formula for this where each limb or piece gets a layer, but you have to have a general idea of what you are going to need in the final asset and what animation is required. There is no point making a separate component if it's not going to move or be seen in the final product. The trouble is that a lot of this work is iterative, and often you find you have to go back and change something when it doesn't look right. You need to be aware of where pieces overlap, which lines are going to be warped by the armature bending, and where a line needs to extend behind another piece in case that piece moves and reveals where the line ends.

The Inking Layers
This is how the inking layers sit on top of each other, showing where lines overlap or extend.

It's really easy to see on the body and legs, but even here on the pieces surrounding the head, the lines that make up the ears, hair, and neck all have to move independently while still looking connected.

Once I'm done with the inking stage I add more layers for color. At this point the whole file gets saved as an export copy, and the layers are merged into one for each piece again and numbered in the order in which they will sit on the animation cell. I keep the older copy, with the separate layers for everything and all the drafts, so I can go back to it if I have to change something.
This is the whole asset, complete and ready for export. Each layer is exported individually as a *.png. Each image is 1024 x 1024 pixels at 600 dpi with a transparent background.
The *.png files are imported into the Texture Packer to minimize the material size in the final project. Each of these elements gets UV mapped to a mesh in Blender, but more on that in the next post.

Start to finish, this took a couple of days of elapsed time, as there is a lot of noodling about with formats, designs, and what-not.

Next up I’ll go into the Blender workflow and preparing the art for animation with complex and simple meshes.