This is an embarrassing post. Some days are there simply to remind you that you don't really know what you are doing. I spent two days trying to track down why my networking scripts were not working when I ran them in the editor. It turns out that the editor creates its own internal network stack when running a project, and it does not connect to the normal ethernet ports on your machine.
Top Tip! Build your project and run it natively if you are using TCP/IP connections! Don't try to run it in the editor (not even for quick checks or small simple projects); it simply won't work.
The really nice thing is that after I got it all working I built a touch screen controller for Android phones that can be used as input in a Unity game.
I'm looking to see if there are any advantages to using Blender as a 2D animation tool (animating meshes) over Unity's Spline/Sprite based animation system. The differences between them at the effort and usability/flexibility level are many and subtle, hence the investigation. The two biggest differences for me are:
1. With the Blender option you are animating in Blender (which I like much more than animating in Unity). The downside is that you have to import the animations into Unity and they are pretty hard to modify once they are there, which also means it's harder to adjust them to react to other actors, objects, and scene elements once they are in the game.
2. With the Blender approach it's a mesh in Unity, not a Sprite, so you can do all the transforms that meshes support. You can also light it as a mesh (the default Sprite Renderer cannot be lit). Being able to use light effects on a 2D image within the game is pretty huge for making it look pretty and for making effects or plot devices (think lightning on a dark and stormy night). You can get light effects on Sprites in Unity if you swap out the default shader for another shader and use the Lightweight Render Pipeline (LWRP), but not every project will suit that. There are also Unity solutions that use custom shaders or a similar mesh and material based solution (see further below for more on that).
Comparing Unity Sprites to Blender Meshes in Unity
The images directly below are taken from the Game screen in Unity. The one on the left is the Sprite based spline rendering while the one on the right is the mesh based fbx from Blender. You can see the difference in quality between the Sprite on the left and the lossy baked images of the mesh on the right – it's not huge and can be improved with some tweaking (the Bilinear filter mode and upping the Aniso Level to 2 helped with the anti-aliasing, and working with the material's Metallic and Smoothness parameters also helped).
Sprite (left) and Mesh (right)
Night time lighting affects the Blender mesh image but not the Sprite based image.
Lighting effects can be much more complex and creatively arranged to hit separate parts of the mesh.
As stated above you can drop an image onto an object in Unity as a material, but it doesn't light as well and is prone to shadowing. Use the Cutout Rendering Mode rather than Transparent in Unity or you get a shadow on the transparency. The image below shows a material with a Standard shader and an image on a Unity 2D plane mesh; there is a shaded square around the outside that marks the image boundary.
Transparency Shader
The image below is the same sprite using a material with a Standard shader and the Cutout rendering mode (the diffuse sprite shader worked similarly). The top one is a normal Sprite Renderer with the custom material replacing the default sprite material. The bottom one is a Unity 2D plane with the custom material applied. Both tests look better than the quality of the Blender imported model, can be layered, and react to lighting in game.
So these are the alternatives to the Blender process I'm describing below, and they are good and valid options. I guess the only reason I would choose the Blender animation workflow is that I hate doing this process in Unity's Animation window: Add Property | drill down through the object | the child | the other child | the bone | the transform | and finally the tiny little plus sign that lets me add one manipulation point! For a deer kick I had 88 different animation points – that's a LOT of stupid clicking down through an object hierarchy to add properties (I know you can hold down Shift and add more than one property at a time, but you still have to manually expand them all). The other alternative is to right click and add all properties for an object and then, if you are patient enough, remove the ones you don't use.
I do like the record feature that adds properties dynamically, but these problems, plus the fact that I find the interface finicky and too small, made me look at Blender.
Importing the Images to Blender and Setting up the Workspace
Moving on to working in Blender with images and Meshes the basic process is this:
For every layer in the artwork of our animated character we exported a separate image file with transparency. Each png file is imported into Blender as an empty image object (Add | Empty | Image). You could use a reference or background image, but since all the parts might move I wanted to group them all under empties.
A Mesh is created for each image and either shaped to the outline of the image or left as a plane and weighted correctly (more on that later).
The image is baked into the UV of the mesh.
The components are then parented to an Armature with automatic weights.
The meshes are weight painted to correct the deforms.
Now it’s ready for animation.
The image objects are all placed at the same origin (0, 0, 0) and rotated 90 degrees on the ‘x’ Axis so they are visible in the viewport from the “front” view.
All the deer components Frankenstein'd together into a whole.
The visibility of parts is toggled on and off so individual pieces can be worked on.
Making the Meshes
For each piece a mesh is made. I took two approaches here:
1. Model a plane mesh as closely as I could to the shape of the sprite.
2. Use a plain rectangular mesh and use weight painting to deform correctly.
To start with the modelling approach: I began with an image and dragged a plane over it in Edit Mode as a wireframe. The origin of the plane was kept at 0, 0, 0 so all the pieces that were made had a common reference (the same as all the images). Using basic mesh deforms and subdivision I created a mesh that matched the image.
The foreleg Mesh
This method was a lot of work, manually placing each vertex on the border of the image. If a vertex is placed a little bit outside the image you get white space on the final product, and if you don't come all the way to the edge you lose some of the black line and smooth finish (the UV mapping is slightly out). Plus I found that if you have to warp the mesh too much for a sharp angle or an awkward placing of the square tiling, you get some minor defects along the line during animation.
Vertices placement
After about the fourth component I got a bit sick of manually moving vertices around, so I took the other approach of just using a rectangular mesh and relying on the transparency of the image to do all the work. This is much easier and faster, but there were gotchas during adding the armature and weight painting. The rear leg below is just one big mesh, subdivided into enough squares to give a decent deform without stretching or warping the black line during animation.
Venison
In Solid shading here is a comparison of the rear leg mesh and the front leg mesh.
Solid Mesh Planes
The image below is both meshes in Render mode (including the armature) and you really can’t tell the difference between them.
Rendered Meshes
The whole mesh ended up looking like this:
Armature and Weight Painting
As you can see above, the armature was added and the mesh objects were parented to it with automatic weights. Because everything is a flat plane, some of which are meant to overlap the others (the closest front leg is in front of the torso and the back leg is behind it), parenting the armature with automatic weights meant that the front, middle, and rear meshes would all get an equal measure of weight in parts. This all had to be manually painted.
Here the torso was weighted across three bones and only the rear one affects the rump (any leg meshes had to be removed from these vertex groups).
Weights had to be carefully graded otherwise warping of the line would result:
The weight transition is too strong here.
It causes artifacts like this.
This is the resulting gradient of weight changes needed to get a correctly deforming line.
The other problem was that random single or lone groups of vertices would be weighted to a bone and not visible until you moved it in Pose Mode: a few vertices on the chest were registered to the root bone. These all have to be manually removed.
The other interesting anomaly with the large rectangular plane meshes was that the weighting would sometimes cause improper warping of the mesh, which bent it around itself in places and showed up as black squares.
The foot vertex group covers all these vertices.
You cannot tell this in Edit Mode when you select it with "show weights".
During transforms in animation these black marks show where the mesh does not warp properly.
The mesh is a mess.
It's because the shin bone weight doesn't go all the way to the edge.
It looks right in Edit Mode.
But if you use the vertex group to select all the vertices it should look like this (all the way to the edge).
These are pretty quick things to fix really, but it took a while to work out exactly what was happening. It was still faster than individually making all the mesh components by hand to fit the image.
Probably a better workflow would be to make reduced, simpler meshes that fit closer to the image but don't require slavishly manhandling the vertices around the borders.
The Shading
UV Mapping is totally easy here but getting the material right was a bit tricky with the transparencies and images. This is the setup I used:
How to make use of a message bus in Unity. This is a good solution for decoupling components and logically organising how your game runs. Instead of doing all the dragging of game objects and components into scripts in the editor, the message bus works a bit like a proxy for the communication between scripts. This is good: it makes complex things easier to manage (no noodling of dependencies everywhere and no heavy FindObject calls), plus I get to make a funny bus pun.
Events
When using Events your objects become Subscribers and Publishers of actions. When you subscribe (or listen) to a particular event channel you get notified when something changes. These notifications go out to every object that is listening on the bus, and each object's script can respond to the message in its own way.
You can make this as complex as you like, but it gets harder the more complex the data you pass around on the bus becomes. This example uses just four components and our events are nice and simple:
An enum that lists the Events on the Bus.
The YellowBus class that handles subscriptions and publishing (by using a dictionary of events).
A BigYellowBusController script that is attached to our bus game object and subscribes to all the “bus” events like starting, stopping at stops, taking on passengers, etc.
A RiderController that is attached to the Bus Rider game objects (the coloured circles), which subscribes and reacts to rider type events like calling the bus, getting on and off, and ringing the bell when you want to get off.
EventsOnTheBus
This is just a simple collection of named constants that are meaningful for our events.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

namespace AdvancedCoding.YellowBus
{
    public enum EventsOnTheBus
    {
        CALL_BUS, ALL_ABOARD, START_ENGINE, NEXT_STOP_PLEASE, STOP_ENGINE, EMERGENCY_STOP, END_O_THE_LINE
    }
}
YellowBus
This class exposes the Subscribe, Unsubscribe, and Publish methods.
Notice the using UnityEngine.Events directive at the top.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Events;

namespace AdvancedCoding.YellowBus
{
    public class YellowBus
    {
        // One UnityEvent per event type, created lazily on first subscription.
        private static readonly IDictionary<EventsOnTheBus, UnityEvent>
            Events = new Dictionary<EventsOnTheBus, UnityEvent>();

        public static void Subscribe(EventsOnTheBus eventType, UnityAction listener)
        {
            UnityEvent thisEvent;
            if (Events.TryGetValue(eventType, out thisEvent))
            {
                thisEvent.AddListener(listener);
            }
            else
            {
                thisEvent = new UnityEvent();
                thisEvent.AddListener(listener);
                Events.Add(eventType, thisEvent);
            }
        }

        public static void Unsubscribe(EventsOnTheBus type, UnityAction listener)
        {
            UnityEvent thisEvent;
            if (Events.TryGetValue(type, out thisEvent))
            {
                thisEvent.RemoveListener(listener);
            }
        }

        // Broadcast the event to every listener currently subscribed to it.
        public static void Publish(EventsOnTheBus type)
        {
            UnityEvent thisEvent;
            if (Events.TryGetValue(type, out thisEvent))
            {
                thisEvent.Invoke();
            }
        }
    }
}
BigYellowBusController
The controller on the bus is a subscriber to the events that it needs to react to and defines private methods for handling those events when they are broadcast. You can have any number of subscribers listening in to your events; in this example we've only got two: the bus controller and the rider controller.
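A cut-down sketch of what that bus-side controller looks like is below (this is not the full listing; only NeedToGetOff is named in the description that follows, so the other handler names and bodies are placeholders to show the shape of the script):

using UnityEngine;

namespace AdvancedCoding.YellowBus
{
    public class BigYellowBusController : MonoBehaviour
    {
        void OnEnable()
        {
            // Listen for the bus related events.
            YellowBus.Subscribe(EventsOnTheBus.CALL_BUS, PickUpPassenger);
            YellowBus.Subscribe(EventsOnTheBus.NEXT_STOP_PLEASE, NeedToGetOff);
        }

        void OnDisable()
        {
            YellowBus.Unsubscribe(EventsOnTheBus.CALL_BUS, PickUpPassenger);
            YellowBus.Unsubscribe(EventsOnTheBus.NEXT_STOP_PLEASE, NeedToGetOff);
        }

        private void PickUpPassenger()
        {
            // Placeholder: drive to the stop, then tell everyone they can board.
            YellowBus.Publish(EventsOnTheBus.ALL_ABOARD);
        }

        private void NeedToGetOff()
        {
            // A rider rang the bell: stop the bus and end the ride.
            YellowBus.Publish(EventsOnTheBus.STOP_ENGINE);
            YellowBus.Publish(EventsOnTheBus.END_O_THE_LINE);
        }
    }
}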
The RiderController does pretty much the same as the BigYellowBusController but only listens to those events that relate to the riders (some of which it shares with the bus). Now here is where it gets interesting: look at the NEXT_STOP_PLEASE parts. Start with the OnGUI() method below, where a button press will publish the NEXT_STOP_PLEASE event. In the OnEnable() method our subscription calls the PressTheBell() private method, which lets our game object handle the call and produce the text message. BUT the BigYellowBusController also subscribes to this event and handles it with its NeedToGetOff() method, so it knows it has to stop and let the passenger off, AAANNDDDD it responds by triggering the STOP_ENGINE and END_O_THE_LINE events. See how they can chain together into larger behaviours and complex interactions between different game objects without actually entangling them at the script reference level?
Also be aware that in these scripts there is a Subscribe in OnEnable() and an Unsubscribe in OnDisable() (so we are not holding on to the listener when the game object disappears). The thing to note here is that we are also listening to our own events: for example, the RiderController triggers the CALL_BUS event and both our own script and the BigYellowBusController script react to it.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

namespace AdvancedCoding.YellowBus
{
    public class RiderController : MonoBehaviour
    {
        private bool _lightMeGUIup;
        private string _message;
        private GameObject _bus;

        void OnEnable()
        {
            // Subscribe to the rider related events when the object wakes up.
            YellowBus.Subscribe(EventsOnTheBus.CALL_BUS, HopOnTheBus);
            YellowBus.Subscribe(EventsOnTheBus.NEXT_STOP_PLEASE, PressTheBell);
            YellowBus.Subscribe(EventsOnTheBus.ALL_ABOARD, JumpOn);
            YellowBus.Subscribe(EventsOnTheBus.END_O_THE_LINE, JumpOff);
        }

        void OnDisable()
        {
            // Unsubscribe so the bus doesn't hold on to listeners for a dead object.
            YellowBus.Unsubscribe(EventsOnTheBus.CALL_BUS, HopOnTheBus);
            YellowBus.Unsubscribe(EventsOnTheBus.NEXT_STOP_PLEASE, PressTheBell);
            YellowBus.Unsubscribe(EventsOnTheBus.ALL_ABOARD, JumpOn);
            YellowBus.Unsubscribe(EventsOnTheBus.END_O_THE_LINE, JumpOff);
        }

        private void Start()
        {
            _bus = GameObject.Find("BigYellowBus");
        }

        private void PressTheBell()
        {
            _message = "I need to get off the bus !!";
            _lightMeGUIup = true;
        }

        private void HopOnTheBus()
        {
            _message = "I need to get on the bus please!!";
            _lightMeGUIup = true;
        }

        private void JumpOn()
        {
            _message = "Hi Driver!";
            // Parent the rider to the bus so it travels with it.
            transform.parent = _bus.transform;
            transform.position = new Vector3(0f, 0.2f, 0f);
        }

        private void JumpOff()
        {
            _message = "Thanks a lot!";
            transform.parent = null;
            transform.position = new Vector3(0f, -3f, 0f);
        }

        private void OnGUI()
        {
            if (GUI.Button(new Rect(160, 10, 100, 30), "Call Bus"))
            {
                YellowBus.Publish(EventsOnTheBus.CALL_BUS);
            }
            if (GUI.Button(new Rect(160, 50, 100, 30), "Bell Press"))
            {
                YellowBus.Publish(EventsOnTheBus.NEXT_STOP_PLEASE);
            }
            if (_lightMeGUIup)
            {
                GUI.color = Color.blue;
                GUI.Label(new Rect(300, 40, 200, 20), "..." + _message);
                StartCoroutine(DropGUI());
            }
        }

        private IEnumerator DropGUI()
        {
            // Clear the message after a couple of seconds.
            yield return new WaitForSeconds(2f);
            _lightMeGUIup = false;
        }
    }
}
The other thing to note is that this message bus is like a party line: all messages are broadcast to anyone listening. All the passengers respond to the NEXT_STOP_PLEASE event as they all share the same code, but of course they could each have a different controller script and handle this event differently. It's pretty powerful stuff.
Playtime in Bus Controller Land !
The code above was modified from an example in Game Development Patterns with Unity 2021: Explore practical game development using software design patterns and best practices in Unity and C#, 2nd Edition by David Baron. I recommend buying a copy; it is the best book of this type that I have read and will take you from a beginner to an intermediate Unity programmer. 🙂
This is one of those workflows that is always a bit fiddly to get right, so I've documented how to do it here in case I forget! One of the downsides of being a solo developer is that your skillset is always being stretched by the available time, so you can end up getting proficient in one aspect of game building and then, by the time you get back to that phase, you've forgotten everything you learned and all the tricks of efficiency and process. Also, in case someone else needs it.
This is what we are aiming for in Unity. An imported mesh with multiple animations being called independently.
Blender Workflow for Saving the Animations
Start with a new project. Select everything (the default cube and lamp) and press X to delete.
In this case I've imported an existing fbx of a hand with a supporting armature, ready for animation. I won't go over the modelling or rigging procedure; there is plenty of help for that out there, but if you need it I would recommend the Riven Phoenix courses because they are so dense (these tutorials are no quick-start or tricks videos but deep, deep dives into the process, the reasons behind it, and how stuff works in Blender at a very technical level).
This is how I lay out Blender for animation: a dual screen front and right view with the Timeline below.
Get your animation window set up and make sure the timeline is available at the bottom.
Making a Pose Library
In the Outliner select the Armature and make a Pose Library. We can use this to set a few basic poses to make the animation process run a little easier. The poses will be the major keyframes that we interpolate between.
It's not the best workflow, but the tech preview for upcoming Blender versions includes an enhanced workflow for the animation process which looks really exciting – google it.
Make a Pose Library
Add the default pose as the first item. Go to Pose Mode, get the model into your default position, and save this pose. (Important: this will be the pose that the model is exported in by default, so try to make it your idle or standing pose.)
Save several other poses (make sure you select all the bones you want the pose to affect – usually this is all the bones). You can overwrite poses if you get one wrong.
Also, when a pose is added and a pose marker is created, the whole keying set is used to determine which bones to key. But if any bones are selected, only keyframes for those bones are added; otherwise all bones in the keying set are keyed (this is why I usually have all the bones selected).
I’ve made several poses and saved them
It's a good idea to set and select each pose a few times to make sure you got it right. I've found that sometimes it's a bit glitchy, or I do something a little bit wrong and it doesn't save properly (actually it's probably not glitchy, it's probably just me).
That Book icon with the question mark is useful when you have all your poses completed. Pose libraries are saved to Actions. They are not generally used as actions, but can be converted to and from them. If you use this icon to "sanitize" the pose library it drops all the poses down to an Action with one pose per frame, so you can go into the NLA Editor window, select this track, and scrub through them. Maybe this is useful as a clip in Unity if you want to split it up using the timing editor and make custom animations in Unity (I've never tried it).
Making the Animations
Go to Dope Sheet – and switch to the Action Editor View.
Action Editor
Make the animation (i.e. start on the first frame, assign the pose from the library, then Shift + I to save rotation and location. Go to the last frame, assign the next pose, then Shift + I to save again).
In the Timeline make sure you are on the beginning frame. Set the pose you want to move from (first keyframe) and save the required parameters.
Shift – I Insert Location and Rotation (make sure the Armature is Selected)
Start with the first pose
The Dope Sheet
Move to the next frame at a suitable scale and change the pose to your ending pose in the editor. Save the Location and Rotation parameters (if that’s all that’s changed).
Add the second pose
Saved pose in the Dope Sheet
Pushing the Animation down the Action Stack
Once you are done hit the “Push Down” button. This is the magic button.
Magic Push Down Button
Next move over to the Nonlinear Animation window.
The NLA Window
Your animations get stored as Actions in the Nonlinear Animation (NLA) Editor window: NlaTrack, NlaTrack.001, etc.
In the NLA Editor you can click the star next to an NLA Track (rename them to make things friendlier) to scrub through that track. Make sure you've got the right animation under the right name, etc.
After hitting Push Down for each finished animation it appears as an NLA Track in the NLA Editor.
I made a few more animations and hey presto: each one of those NlaTracks is an animation that we can use in Unity. The PoseLib track is also marked there with orange lines, one for each pose on a frame, which is a good reference track if you need it.
The Animations Stacked up in the NLA ready for Export with the *.fbx
Export from Blender
These are the settings I use to export. It’s safer to manually select only the Armature and the Mesh here.
It’s useful to have Forward as -Z Forward for Unity.
Blender Export Settings
Import Into Unity
This is what it looks like when I import the .fbx into Unity.
The Animation Tab of the Asset (on import)
The animations come out as duplicates but you only need one set. Work out which ones you want and delete the others using the minus button when you import. This bit can be a bit fiddly, and sometimes I've had to do the export and import process a couple of times to get it to work. Sometimes what works is to drag and drop all your animation NLA Tracks into one track in the NLA Editor and select it with the star before exporting. Sometimes it works, sometimes not. I'm not sure why.
After that I drag the model into the scene and add an Animator Controller. Then you can just drag the animations from the imported model into the Animator window like below and set up transitions as you see fit. Below I've made them all come from Any State and added some Triggers so I can play with them in the window for testing.
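If you'd rather fire those Triggers from code than click them in the Animator window, a small test script along these lines does the job (the trigger names here are placeholders for whatever parameters you actually set up):

using UnityEngine;

// Quick test harness for the imported clips: press 1 or 2 to fire a Trigger.
// "Wave" and "Fist" are assumed parameter names - swap in your own.
public class HandAnimationTester : MonoBehaviour
{
    private Animator _animator;

    void Start()
    {
        _animator = GetComponent<Animator>();
    }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Alpha1)) _animator.SetTrigger("Wave");
        if (Input.GetKeyDown(KeyCode.Alpha2)) _animator.SetTrigger("Fist");
    }
}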
You can see the result of that testing in the .gif at the top of the article. (Apologies for the quality of that .gif; it seems to have picked up some ghosting artifacts around the fingers – promise it looks awesome on the screen.)
The Animator Controller
So there are a few limitations to this workflow that need to be mentioned. Some people like to save their whole .blend file into their Unity Assets folder so they can make updates on the fly while modelling. That won't work with this set up: the animations need to be saved down to a *.fbx file so that Unity can find them when the asset is imported. So if you like to have access to your .blend and use animations like this, you need to export the *.fbx and import it again, and have both the .blend and the *.fbx in your asset folders, which can be a bit confusing and messy and makes for a bigger project.
I've been in Beta testing for a new game I'm about to release on the Google Play Store (the game is called Endless Elevator). I kept having Native Crashes on specific Android platforms in all my builds in the Pre-Launch Reports. Native Crashes can be terrible to work through if you get unlucky, so I was a bit worried and figured I'd just have to leave it as it was and release with errors! But being a bit stubborn I threw a few days into sorting through it and am very glad I did. Working through the problem highlighted some things I didn't know about Android video support and was an interesting exercise in troubleshooting. So here is the method I followed and the resolution to the problem.
In each case it was always the armeabi-v7a package that was causing the issues. (I split my build into two APKs, for arm64 and armeabi, to make for a smaller installation size – I haven't gone down the Android App Bundle path yet.)
These are some of my base Beta builds and in most cases there were 4 errors relating to specific platforms.
The Pre-Launch tests are run on a variety of Android devices, but usually they include the four below in some form or other, and my build kept crashing with a Native Error on each of them.
The usual suspects
When I looked at each of them in turn and played the video of the interactive session, the fail point always seemed to be around the time when a full screen projected video was playing or about to play. The video is used as an introduction and tutorial to the game, so it was pretty important for me to get it working.
The drill down screen of the crash report where you can see the video of the session and get access to the logs.
I downloaded all the logcats from the console above and looked for any errors or crash reports.
In each case I found this line (which was a bit of a dead giveaway):
——— beginning of crash
A half dozen lines above the likely culprit was writ large:
07-23 04:00:47.862: W/MediaAnalyticsItem(9345): Unable to record: (codec:0:-1:-11:0:3:android.media.mediacodec.mime=audio/ac3:android.media.mediacodec.mode=audio:android.media.mediacodec.encoder=0:) [forcenew=0]
07-23 04:00:47.890: W/Unity(9345): AndroidVideoMedia: Could not create decoder for mime type audio/ac3.
07-23 04:00:47.890: W/Unity(9345): (Filename: Line: 2177)
07-23 04:00:47.906: I/Robo(9288): No foreign elements detected, falling back to original ScreenState.
07-23 04:00:47.910: I/Robo-HyperMultiGraph(9288): New Screen: Optional.of(ScreenNode {Id=5, PackageName=com.ZuluOneZero.EndlessElevator, ActivityName=Optional.of(com.unity3d.player.UnityPlayerActivity)})
07-23 04:00:47.913: E/Unity(9345): Could not allocate memory: System out of memory!
07-23 04:00:47.913: E/Unity(9345): Trying to allocate: 4294705156B with 16 alignment. MemoryLabel: Audio
07-23 04:00:47.913: E/Unity(9345): Allocation happened at: Line:70 in
07-23 04:00:47.913: E/Unity(9345): Memory overview
A bit of googling about led me to believe that, as per the error message above, the audio codec used in the video was the problem. AC3 is the audio codec used in my MP4 video. I'd never given it much thought, but this format is not supported across all Android platforms (one of the problems of Android development is that there are so many different platforms out there).
The video editing software that I normally use is called OpenShotVideo. It's fantastically good for the price (free) and is easy to use and powerful enough for my meagre needs. It turns out the default audio codec it uses is AC3 (there is probably a way to modify this in OpenShotVideo, but I wasn't in the mood to troubleshoot someone else's software). I really hadn't given the audio codec part of the MP4 a second thought.
This is the Export Panel from OpenShotVideo where I confirmed that the Codec was indeed ac3.
While I was doing all this work, and after I'd worked out that the audio codec in the video was the problem, I had a look at the video settings in Unity. I found that there was already a built in transcoder that I'd never noticed, right there in the Unity video asset import screen.
Transcode !
That's pretty cool! Unity had already solved my problem before I even knew I had it. So I ticked the Transcode box and waited twenty minutes while it went to work. That wait time should have been a bit of a warning. I did the build and uploaded the new APKs to the Google Play Console, but while doing that I found that my build size had jumped by almost 17 MB!
This was my size before the transcoding:
And afterwards:
A quick look at the Editor.log confirmed that the transcoding process had turned my lovely low quality 7 MB movie into over 20 MB:
Used Assets and files from the Resources folder, sorted by uncompressed size:
22.1 mb 6.8% Assets/Art/IntroMovePlusFoyerClippedSlowLow.mp4
After importing this into my project and rebuilding again I was left with a similar package size and no Native Crashes. Hooray. I’m going to release this Beta Build to Production soon so getting over this little hurdle feels like a huge V for Victory. Huzzah.
For Endless Elevator we wanted to do an introduction scene for the game. The gameplay, as the name suggests, consists of climbing endless elevators and escalators. The player navigates floor after floor in the bad guys' luxury hotel and tries to climb as high as possible while defeating them. It's a 2.5D top down scroll mechanism, clipped to the limits of the building. Just dumping the player into the start of the game felt a little weird, as there was no context for where you were in the story. Hence the need for an opening shot to set the scene and to literally drop the player into the game.
Our hero flies into the enemies' oasis headquarters in a helicopter and storms into the foyer of their luxury hotel. We mocked up a scene and a helicopter in Blender and imported the assets into the main foyer scene of the game. I hadn't used Unity's Cinemachine before but wanted to try it out. Previously, in other projects, we had captured gameplay for cut-scenes using external software and video editing suites, which was OK, but the experience with Cinemachine and Unity Recorder was way smoother. It was much easier to work with and produced much better quality avi files. Plus we didn't have to write custom scripts for switching cameras and panning. It was so easy it kind of made me excited about movie making with Unity, but you know, I don't need another distraction.
To start working with Cinemachine and Unity Recorder you can download them using the Package Manager. Unity Recorder has only recently been added (it’s still also on the Asset Store) so you need to enable the “Preview Packages” selection from the Advanced menu in the Package Manager.
Below is a screen shot of my scene in Unity. You can see the main building in green and the surrounding buildings and water in the bad guys oasis HQ. The helicopter is just visible down where the camera sight lines join and on the left in the Hierarchy you can see my Timeline component and my two vcams (Virtual Cameras).
The Timeline is where all the magic happens and was very easy to set up.
First we did a few animations on the helicopter to fly it in to the building and make the rotor spin. Then we added an animation to move the character from the helicopter into the building (which looks terrible, but remember this is a quick mock up).
The Helicopter Animation
We dragged this animation into a new Animation track on the Timeline object (right click and Add Animation Track). Then we created two Virtual Cameras in the scene. One camera (vCam1) was set, using the properties in the Inspector, to automatically Look At and Follow the helicopter. This means that wherever we flew the helicopter the camera would follow it around from behind at a set distance and keep it in the frame automatically. This was really fun when we had it under manual control for testing, and it worked well under the control of the Animator. We used a preset for a bit of camera jitter and shake to mimic a real camera man in a second helicopter.
The second camera (vCam2) was stationary at the building site but set to Follow (i.e. Look At) the main character. We timed the cut from one camera to the other so that once the helicopter landed it would pass control to the second camera and seamlessly start focusing on the player. This was so easy it was ridiculous. The camera objects were added to the Timeline, and the split where we cut from one camera to the next is clearly visible in the screenshot below (the two triangles). The first time I ran it and that view cut automatically from one vcam to the other, I got an enormous sense of satisfaction, like I'd just been named a modern day Hitchcock.
The Timeline Editor Window
To record what we had done as an AVI we opened the Recorder Window:
Opening the Recorder Window.
We used the default settings and triggered the start of the recording with the start of the animation by having it in the Timeline. The capture target was the Game View (you can also capture other elements of the Editor if you need to). The Output Resolution is interesting: you can use the size of the Editor Game window on your screen or set it to a standard default movie format.
The Recorder Window
That's about it. We hit Play in the Editor and the Timeline starts the recording of the AVI and synchronises the animations and the camera movement automatically. Once we are done we are left with a good quality moving image of our game screen that we will use as the cut-scene to drop the player into the start of the game. Obviously what we've got here is just a "screen test", but I was really happy with what we could achieve in just a few hours and with so little complexity.
To start with I wanted to do a general investigation into Wwise, the integrated audio package for Unity by Audiokinetic. When I started working through it I figured it would be more interesting to look at Wwise in comparison to Unity's own audio API and Mixer components, which have been around since Unity 5.
To do that I'm going to compare a game across three different builds. Build one is its original state, with simple scripts that call the AudioSource.Play() method. In the second build I will add another layer of complexity by using the Unity built in Mixer and see if there are any differences or advantages. Lastly I'll redo the project with the Wwise API and investigate how that impacts build size and project complexity, and weigh it up against the previous two builds. Mostly I'm looking for differences in performance between the three builds, and in build size and complexity, weighed up against ease of implementation and flexibility.
I refreshed an old project called "MusicVisualiser" that I started for my Five Games in Ten Weeks Challenge. The game is like a singing solar system. There is a bunch of "planets" in the night sky that each play a set piece of music when clicked. It's a really simple concept and project, but I think it will work for this comparison as the parameters can be limited to just a few audio tracks while we play with spacing and roll-off and other advanced audio features.
Let’s have a look at the game first.
These "planets" are simple native Unity sphere meshes, each with an Audio Source component and a particle system that's triggered when it's clicked. You can see in the Audio Source that we are not using a Mixer for Output, so all the Audio Sources compete for resources and play at their default volume and priority.
The PlayMe script just takes in the AudioSource and plays it:
public AudioSource my_sound;

// This check runs inside Update(): on a left click, see whether the ray hit this planet.
if (Input.GetMouseButtonDown(0))
{
    RaycastHit hitInfo;
    target = GetClickedObject(out hitInfo); // GetClickedObject is a raycast helper defined elsewhere in the script
    if (target != null && target.name == my_name)
    {
        _mouseState = true;
        screenSpace = Camera.main.WorldToScreenPoint(target.transform.position);
        offset = target.transform.position - Camera.main.ScreenToWorldPoint(new Vector3(Input.mousePosition.x, Input.mousePosition.y, screenSpace.z));
        my_sound.Play(); // This is the Audio Component!
        var expl1 = GetComponent<ParticleSystem>();
        expl1.Play();
    }
}
Pretty simple, right? This is what the project looks like in the Profiler when it's running and being actively engaged with. At that point two Audio Sources are playing:
This is the build size from the Editor Log with our Audio Files broken out:
Used Assets and files from the Resources folder, sorted by uncompressed size:
204.3 kb 0.5% Assets/SomethingLurks_AAS.wav
164.5 kb 0.4% Assets/Step2Down_AAS.wav
136.9 kb 0.3% Assets/Underwater_AAS.wav
41.8 kb 0.1% Assets/M1_M12_37_ThumPiano_Aflat1.wav
Unity Audio with Mixer
Now we add in the Mixer component to the project:
Then we add a couple of channels to the Mixer to split the audio between left and right, and each Audio Source is dropped into one or the other of the Mixer channels:
Adding the Mixer as the Output source
Next, for a bit more interest, I added some effects in the Mixer. Here is where we see the advantages of using the Unity Mixer: sounds can be manipulated in complex ways and the audio output chain can be defined with presets and levels etc.
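Those levels and effects can also be driven from script at runtime if you expose a parameter on the Mixer (right click a property like a group's Volume in the Inspector and expose it to script). A minimal sketch, assuming a parameter exposed under the name "LeftVolume":

using UnityEngine;
using UnityEngine.Audio;

// Drives an exposed Mixer parameter from code.
// "LeftVolume" is an assumed name for the Left channel's exposed Volume.
public class MixerFader : MonoBehaviour
{
    public AudioMixer mixer; // drag the Mixer asset in via the Inspector

    public void DuckLeftChannel()
    {
        // Exposed volume parameters are in dB, so -10f is a noticeable dip.
        mixer.SetFloat("LeftVolume", -10f);
    }
}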
If we have a look at our Profiler while running with the new component we cannot really see any great differences. The 'Others' section of the CPU usage is a bit higher and the Garbage Collector in the Memory section is pumping regularly, but the Audio stats look pretty much unchanged:
Profiler Mixer
Mind you, this is a fairly low utilisation game, so we might get wildly different stats if we were really putting the system under the pump, but I'm not performance testing here, just comparing run states between the two builds.
Next, if we build the game and have a look at the Editor log, the only thing that's changed is that the "Other Assets" size is a KB higher (the complete size has not changed):
Next we are going to add Wwise to the project. This is the basic workflow: in the Wwise Launcher we register our project, and on the first tab we are presented with three hierarchies.
Project Audio Explorer in Wwise
The Master-Mixer Hierarchy does what it says. The Actor-Mixer Hierarchy is where most of your game audio develops (use the SoundSFX defaults). The Interactive Music Hierarchy is other stuff we won't get into.
Events Tab
The next tab along is the Events tab, where you link your audio to game events. You can define your event here (use the default Work Unit).
Once you've got the event there you can associate it with the audio in the Action List.
SoundBank Tab – this is the bit that gets imported into your project.
Next you generate a SoundBank with Wwise that includes your audio and the code for the API calls to trigger sounds. You export that SoundBank into your game engine and link up the calls in your code.
To Get Started with Wwise
To get started, make an account with Audiokinetic and download the Wwise Launcher. The Integration package for Unity can be downloaded and installed directly from the Wwise Launcher.
In the Wwise Launcher there is a WWISE tab from which you can install and start the application. Once you open it up you need to register your project within the launcher so Wwise can track you 🙂 (click on the key icon next to your Wwise project and select 'Register your Project to obtain a License'). Wwise will run in Trial mode, which restricts the SoundBank content to 200 media assets and cannot be used for commercial purposes. Pricing for licensing is on their site, but this is not a sales piece, so if you want it you can look it up.
There are a bunch of plugins by Audiokinetic and their partners available, and also community offerings like AudioRain, a dedicated rain synth with 60 procedurally generated presets for rain. What's not to love about that!
There is a Wwise SDK for authoring your own plugins and a Wwise API which allows you to integrate into any engine, tool or application.
Audiokinetic also do certifications that cover audio integration workflows, mixing virtual soundscapes, working with sound triggering systems, and performance optimisation: https://www.audiokinetic.com/learn/certifications/
Basically you let the Wwise Launcher do all the setting up for you. You install the Wwise binaries from here and manage your platform versions. Projects can be integrated here, and if you don't have the necessary plugins installed the Wwise Launcher will install them for you.
Integrating the MusicVisualiser project with Wwise.
This is how big the Wwise Integration packages and binaries are.
Applying…
Done!
That’s basically it for the set up of Wwise and Integration with your Project. Next up we will have a look at what this has done to the Unity Console.
Wwise in Unity
The first thing we see is a bunch of errors that can be safely ignored. As we have not yet configured our project in Wwise with audio files and events, there is no SoundBank to generate yet.
Unity – Initial Errors can be ignored if you have not generated your SoundBank yet.
In the Unity Editor we have a new tab: the Wwise Picker contains all the elements of the Wwise project that were imported with the project integration. There is also a WwiseGlobal Game Object in the Unity Hierarchy and all the Wwise folders in the Assets folder.
Unity Editor
The WwiseGlobal Game Object
Under the Component pull down there is a whole slew of Ak (AudioKinetic) options.
Wwise Components.
Wwise Configuration Settings.
I know there has been a lot of “show and tell” in this post but I’m going to keep going and show the process of importing the audio into the Wwise Project, creating Events, and Generating the SoundBank.
Working in Wwise
In the Wwise Project Explorer I right click on the Default Work Unit and import the audio files that were part of my project. (I've stripped the raw files out of my Unity project for now and removed all the Mixer components etc.)
Importing Audio Files into the Wwise Project.
This is what the files look like.
Right click on the file to create a new Event (which can be called in the Unity code).
Here is the event created for "Play".
And all my "Play" events.
Finally a SoundBank is generated from which the Unity project can access the sound files through the AudioKinetic API.
Generating a SoundBank
Wwise Audio in Unity
When we go back to our Unity Editor, refresh the project, and generate SoundBanks, we are presented with the following in the Wwise Picker. We can now access these files and drag them onto our game objects directly. It's that simple: drag a sound from the Picker onto a Game Object and it automagically creates a component that is immediately accessible from within the editor.
The new audio imported into the Wwise Picker.
Below the Play_Underwater_AAS event and audio file has been added to the Sphere Game Object.
The Triggers, Actions, and Callbacks can all be configured and accessed through the API. In my case I easily integrated the functionality I wanted with only a one line change to my attached PlayMe.cs script that we looked at above. So now, instead of the audio coming from the AudioSource component referenced by my_sound, the audio is played by AkSoundEngine.PostEvent.
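In other words the one line change looks roughly like this (the event name matches the Play_Underwater_AAS event shown above; the rest of PlayMe.cs stays the same):

// Before: play the clip assigned to the AudioSource component.
// my_sound.Play();

// After: ask Wwise to post the named event against this GameObject.
AkSoundEngine.PostEvent("Play_Underwater_AAS", gameObject);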
Actually getting Wwise installed, set up, and integrated with my project was very easy, but not without bumps. It takes a very long time for packages to download, and I had a bit of trouble upgrading my Wwise Launcher from an old version (it got stuck! and I had to remove it by hand and re-install). When I did have issues I got some excellent help from Audiokinetic: after logging a case I was emailed directly by a real person (which honestly was so surprising and wonderful, getting that kind of support from a company when I'm on a trial license with no formal support agreement or rights).
So let's have a look at the differences in performance and package size. The first thing you notice in the Profiler below is that there is very little difference in performance, but we can no longer see our audio stats as they have been abstracted away from the Unity engine. The graph still shows the resources being used by audio, and the Total Audio CPU seems to be up to a third lower than with the native Unity audio. It looks like it's being clamped at just over 1.2 MB instead of regular peaks over 3 MB.
Profiler with Wwise Audio running.
The Build Report is only a couple of MB larger for the total project size:
Basically a 2 MB difference! The Sounds have been extracted away as a file in the Build Report and we assume they are now part of “Other Assets” above.
I'm kinda blown away by how little additional file size there is to the build, considering the additional libraries, code, and available complexity that Wwise adds. There is literally a plethora of options and effects that we can play with in the Wwise package. It's a bit like the excitement I got after the install of my first real audio DAW. The scope is part boggling and part fantastical wonder at where we can go next. (Audio does get me unusually stimulated but that's to be expected and tempered accordingly.)
The questions I wanted to answer with this whole experiment were: 1. Would including audio middleware like Wwise make my project more complex and difficult to manage? 2. Would the added package make my build much larger? And 3. Would the performance of the audio package be as good as the simple Unity audio API? The answers are: No, No, and Yes. So I'm pretty happy with that, and if the cost of using the licensed version of Wwise is balanced against the advantages of using it in the total cost of the project, then I would most definitely, one hundred percent, go for it.
In our game currently under development, Endless Elevator, I decided to add a new feature: more depth. The game is 2.5D and mostly sits in a very shallow Z axis, a limited X axis, and an endless Y axis. As the name suggests, your character is inside a never ending building, trying to kill or avoid the enemies by escaping up elevators and escalators. The mock-up of a level below, in the Scene view of Unity, gives you the picture.
Endless Elevator Opening Level Spawn
The player view is a much smaller slice of this level…about this much:
Roughly how much the camera views
I had the idea that I wanted to extend this playing field into a deeper third dimension, where the character could walk down a hallway, away from the camera, and seemingly deeper into the building. Instead of the camera following the character down the hall on the Z axis (as it follows him on the X and Y), I wanted to pan around the edge of the building and pick the character up again on the new parallel, so that it looks like the camera has turned ninety degrees and we are looking at a new side of the building.
Have a look at the .gif below and I'll try to explain that better. The top half of the image is the Scene view and the bottom half is what the camera (and the player) sees in the game. In the top half Scene view you can see my character in green, highlighted by the handler arrows. He scoots around a bit and then disappears down the hall. When he gets to a set depth it triggers the camera to move in the game window below, and you will see the levels reconfigure into a new building face (note that the elevators, escalators, and doors will be in different positions).
In the bottom half of the .gif, which shows what the player sees, it looks like once the character disappears down the hall the camera pans right, looks at the edge wall of the building as it goes around the corner in a ninety degree turn, and then follows the character on the new level again.
(Watch the top half for a bit then the bottom half)
You can see in the Scene view that we are not really moving anything in the building; it just recreates the levels. The camera is doing all the work. It's not perfect yet, and without any background around the building to relate the movement to it's a bit hard to tell whether we are turning ninety or one hundred and eighty degrees on that camera flip, but it's getting there.
It took a while to work out how to do this and I tried several different methods but this is the basic logic of the camera move script that is attached to the character.
The movement of the camera is triggered when the character moves past a certain point on the Z axis.
Stop the regular character and camera movement functions by disabling that scripted behaviour on each object.
I set up an empty game object called the CameraLookAtPoint that hovers at the end of the building on the far Right of the X axis.
Pan the camera Right toward the CameraLookAtPoint.
When the camera gets to within a certain distance to the LookAtPoint it starts to rotate towards it.
The camera moves around the LookAtPoint so that it faces that end wall of the building as it turns.
At this point while the only thing the camera can see is that blank edge wall of the building I destroy the old level we just walked out of down the hall and create a new randomly generated level.
This is the great illusion! The camera is then moved instantly to the far Left of the building (the opposite end) and it appears as if we have just turned a corner.
Lastly the camera picks up the character again and we hand control back to the normal camera and character movement scripts.
This is where the CameraLookAtPoint sits; the camera rotates ninety degrees around it as it gets to the edge of the building.
I’ll post the whole script below with comments but I’ll walk through the interesting bits here.
To start off with I needed to grab references to several external elements: the camera, the level instantiating script, the character controller script, and the character's Rigidbody. (I needed the rb because when the levels were destroyed and recreated, gravity would take hold between them and the character would fall into the endless abyss!)
I had to use a series of "if" conditionals and Boolean flags to control the movements of the camera. This was surprisingly hard to get right. It's often not intuitive, when you are in the Update function, what the looping iterations will do with your code, but this allowed me to slow things down and get control back.
The calls to the InstantiateScene1 script were needed to copy the variables used there in the main flow of the game to track what level we were on and how high up the building we had climbed. That way I could decouple that mechanism from this one and happily destroy levels and recreate them without interrupting the flow of the rest of the game.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class SpinLevelDownHall : MonoBehaviour {
public GameObject cameraLookAtPoint;
public bool triggered;
public bool cleanAndRebuild;
public bool moveRightEnd;
public bool moveRightEndBack;
public bool moveLeftEnd;
public bool movebacktoCharacter;
public int levelHolder;
public float cleanUpHeightHolder;
public bool firstRunHolder;
public Vector3 target_camera_Position;
private GameObject the_camera;
private RigidbodyCharacter move_script;
private CameraFollow cameraFollow_script;
private InstantiateScene1 instantiate_script;
private Rigidbody rb;
// Use this for initialization
void Start () {
the_camera = GameObject.FindGameObjectWithTag("MainCamera");
move_script = GetComponent<RigidbodyCharacter>();
cameraFollow_script = the_camera.GetComponent<CameraFollow>();
instantiate_script = GetComponent<InstantiateScene1>();
rb = GetComponent<Rigidbody>();
}
// Update is called once per frame
void Update () {
if (transform.position.z > 15)
{
triggered = true;
moveRightEnd = true;
cleanAndRebuild = false;
if (triggered)
{
// Stop the character and the camera moving
move_script.enabled = false;
cameraFollow_script.enabled = false;
if (moveRightEnd)
{
// Set the target camera position on the x axis at the far right of the building
Vector3 target_camera_Position = new Vector3(78f, the_camera.transform.position.y, the_camera.transform.position.z);
// Set the camera look at point on the y axis (so it's on the same level as the player)
cameraLookAtPoint.transform.position = new Vector3(cameraLookAtPoint.transform.position.x, transform.position.y + 4f, cameraLookAtPoint.transform.position.z);
// start moving the camera
the_camera.transform.position = Vector3.MoveTowards(the_camera.transform.position, target_camera_Position, 50 * Time.deltaTime);
// When you get close to the end start swinging the camera around to look at the wall
if (the_camera.transform.position.x > cameraLookAtPoint.transform.position.x - 10)
{
the_camera.transform.LookAt(cameraLookAtPoint.transform);
}
// When you get really close to the first position move the camera beyond the wall to the side
if (the_camera.transform.position.x > target_camera_Position.x - 0.5f)
{
target_camera_Position = new Vector3(78f, the_camera.transform.position.y, 4f); // 4f is perfect when the camera is at -90 deg
moveRightEnd = false;
moveRightEndBack = true;
cleanAndRebuild = true;
if (moveRightEndBack)
{
the_camera.transform.position = Vector3.MoveTowards(the_camera.transform.position, target_camera_Position, 50 * Time.deltaTime);
// When you get really REALLY close to the second position move the camera to the negative X Axis side
if (the_camera.transform.position.z > target_camera_Position.z - 0.2f)
{
moveLeftEnd = true;
if (moveLeftEnd)
{
target_camera_Position = new Vector3(-78f, the_camera.transform.position.y, 4f); // The other side
the_camera.transform.position = target_camera_Position; // snap move the camera
the_camera.transform.LookAt(cameraLookAtPoint.transform);
moveRightEndBack = false;
moveLeftEnd = false;
movebacktoCharacter = true;
}
}
}
}
}
if (cleanAndRebuild)
{
// Call cleanup on everything
rb.useGravity = false; // so the character doesn't fall through the floor
//cleanUp;
cleanUpHeightHolder = instantiate_script.floorCntr;
cleanUpHeightHolder = cleanUpHeightHolder * 8; // so it cleans up all the floors
firstRunHolder = instantiate_script.firstRun;
instantiate_script.cleanUp(cleanUpHeightHolder, firstRunHolder); // cleanup height is usually two levels below the character - we are raising it to two levels above him
//then make three levels
levelHolder = Mathf.RoundToInt(instantiate_script.player_level);
instantiate_script.makeLevel(levelHolder);
instantiate_script.makeLevel(levelHolder + 8);
instantiate_script.makeLevel(levelHolder + 16);
cleanAndRebuild = false;
}
triggered = false; // makes it only run once
}
}
if (movebacktoCharacter)
{
// new position is back to the character
// Start breaking and making your new levels in here
// first move the character
transform.position = new Vector3(transform.position.x, transform.position.y, -4);
target_camera_Position = new Vector3(transform.position.x, transform.position.y + 4.9f, -20.51f); // the starting camera position
the_camera.transform.position = Vector3.MoveTowards(the_camera.transform.position, target_camera_Position, 50 * Time.deltaTime);
the_camera.transform.LookAt(transform);
// Once again when you get close to your original camera position disable and enable normal camera tracking again
if (the_camera.transform.position.x > transform.position.x - 0.02f)
{
movebacktoCharacter = false;
rb.useGravity = true;
move_script.enabled = true;
cameraFollow_script.enabled = true;
}
}
}
}
It’s not the prettiest code, and I admit to hacking my way through it, but it works. Maybe you have a better method for doing something similar – if you do please feel free to add a comment – I’d like to hear it.
This week I decided to totally redo the way I have been handling character movement.
I used to have a free ranging character controller that basically moved in whatever direction the joystick wanted. I never really had that as my vision for this game, as I wanted a more 2.5D feel and a limited number of places you could move to. I got to this point because I was working on the enemy AI scripts, using a custom mesh to navigate around.
The game level (or rather the endless pattern of a level) is 16 units wide and two deep. In this case a unit is a building component like a piece of floor, a doorway, or an elevator shaft. All these components are 8 x 8 blocks in Unity scaling. This is an example of an elevator shaft:
When you build them all together it looks something like this (that’s a very basic mock up below):
So you've got a forward position where you can go up the stairs on the right, a middle position where you can go up the stairs on the left (see how they are set back a bit), and a very back position which is through a doorway. So basically there are three parallel positions along the X axis.
What I wanted to do was to create a “patrol point” on every floor space within that grid of a floorplan and also create a patrol point if there is a door that is open.
The floor at the top or bottom of a stair does not get a patrol point, so there is never anyone there to block you going up or to knock you back down.
This all gets created at instantiation time, and every level is random, so I cannot use any of the mesh or nav components that Unity provides.
So all my patrol points get made into lists when the floor is instantiated and added to an array of levels.
When an enemy AI agent starts, it reads in all the available patrol point nodes on its floor and works out the available nodes around it to move to.
So the agent knows about the nodes around it in a four square (left, right, forward, and back) plus its own central location.
As the game mostly scrolls along the left-right axis during gameplay, the nodes are weighted so that travel along the X axis is more likely than along the Z.
At the end of the frame after moving (in LateUpdate) the node list is refreshed if a new node has been reached.
How does an agent find all the nodes around it? Using a raycast is too expensive, so on each move the agent parses the list of nodes and works out the closest one in each direction.
Basically, for each node in the list, take its x and z and subtract your own, then put that value in a temporary holder. Every node gets tested the same way, and if the new value is less than the one in the holder then you've got your closest node in that direction. You need to do this four times (left, right, forward, and back) and handle the null values when there is no space to move to next to you.
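A sketch of that closest-node pass looks something like this (plain Vector3 positions and the class name are stand-ins for the real node objects in the project):

using System.Collections.Generic;
using UnityEngine;

public static class PatrolNodeSearch
{
    // Returns the index of the nearest node lying roughly in 'direction',
    // or -1 if nothing sits that way (a wall, a stair top, or the edge of the floor).
    public static int FindClosest(Vector3 myPos, List<Vector3> nodes, Vector3 direction)
    {
        int closest = -1;
        float bestDistance = float.MaxValue; // temporary holder for the best match so far

        for (int i = 0; i < nodes.Count; i++)
        {
            Vector3 offset = nodes[i] - myPos;
            if (offset.sqrMagnitude < 0.01f) continue; // skip our own node
            // Only consider nodes that lie in the direction we are testing.
            if (Vector3.Dot(offset.normalized, direction.normalized) < 0.7f) continue;

            if (offset.sqrMagnitude < bestDistance)
            {
                bestDistance = offset.sqrMagnitude;
                closest = i;
            }
        }
        return closest; // call four times: left, right, forward, and back
    }
}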
At the agent's update interval, when a new node is reached, we first check all the nodes in the list and make a new list of the nodes on the floor and our closest points. This gets added to the basic agent control behaviour of "looking around", where the AI stays in one spot and rotates to look left and right. In all cases, if they are looking left and the character is to the right then they cannot pursue him. If he fires then they will turn. All of these behaviours are then blended by weight.
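The X-over-Z weighting mentioned above can be as simple as a biased random pick when choosing which of the four neighbouring nodes to walk to next (the 80/20 split here is illustrative, not the tuned value from the game):

using UnityEngine;

public static class DirectionPicker
{
    // Weighted choice: movement along X is four times as likely as along Z.
    public static Vector3 PickDirection()
    {
        float roll = Random.value; // 0..1
        if (roll < 0.4f) return Vector3.left;
        if (roll < 0.8f) return Vector3.right;   // 80% of moves stay on the X axis
        if (roll < 0.9f) return Vector3.forward; // the rest split across Z
        return Vector3.back;
    }
}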
I’m not sure if I will continue with this method for the character controller but it’s pretty good for the enemy AI scripts.