Unity: How to Create a Cut-Scene with Cinemachine and Unity Recorder

Hi Xander here,

For Endless Elevator we wanted to do an Introduction Scene for the game. The gameplay, as the name suggests, consists of climbing endless elevators and escalators. The player navigates floor after floor in the bad guys' luxury hotel and tries to climb as high as possible while defeating them. It's a 2.5D top-down scroller clipped to the limits of the building. Just dumping the player into the start of the game felt a little weird as there was no context for where you were in the story. Hence the need for an opening shot to set the scene and to literally drop the player into the game.

Our hero flies into the enemies' oasis headquarters in a helicopter and storms into the foyer of their luxury hotel. We mocked up a scene and a helicopter in Blender and imported the assets into the main foyer scene of the game. I hadn't used Unity's Cinemachine before but wanted to try it out. Previously, in other projects, we had captured gameplay for cut-scenes using external software and video editing suites, which was OK, but the experience with Cinemachine and Unity Recorder was way smoother. It was much easier to work with and produced much better quality AVI files. Plus we didn't have to write custom scripts for switching cameras and panning. It was so easy it kind of made me excited about movie making with Unity, but you know, I don't need another distraction.

To start working with Cinemachine and Unity Recorder you can download them using the Package Manager. Unity Recorder has only recently been added (it’s still also on the Asset Store) so you need to enable the “Preview Packages” selection from the Advanced menu in the Package Manager.

Cinemachine in the Package Manager

Have a look at the Unity Manual and tutorials for more info about Cinemachine and Unity Recorder.

Below is a screen shot of my scene in Unity. You can see the main building in green and the surrounding buildings and water in the bad guys' oasis HQ. The helicopter is just visible down where the camera sight lines join, and on the left in the Hierarchy you can see my Timeline component and my two vcams (Virtual Cameras).

The Timeline is where all the magic happens and was very easy to set up.

First we did a few animations on the helicopter to fly it in to the building and make the rotor spin. Then we added an animation to move the character from the helicopter into the building (which looks terrible, but remember this is a quick mock up).

The Helicopter Animation

We dragged this animation into a new Animation track on the Timeline object (right click and Add Animation Track). Then we created two Virtual Cameras in the scene. One Camera (vCam1) was set using the properties in the Inspector to automatically Look At and Follow the helicopter. This means that wherever we flew the helicopter the camera would follow it from behind at a set distance and keep it in the frame automatically. This was really fun when we had it under manual control for testing and worked well under the control of the Animator. We used a preset for a bit of camera jitter and shake to mimic a real cameraman in a second helicopter.
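
As an aside, the same targets can be assigned from code if you ever need to swap them at runtime. Here is a minimal sketch, assuming the Cinemachine 2.x scripting API, where Follow and Look At are just Transform properties on the virtual camera:

using UnityEngine;
using Cinemachine;

public class AssignVcamTargets : MonoBehaviour
{
    public CinemachineVirtualCamera vcam1;   // the follow camera
    public Transform helicopter;             // the animated helicopter

    void Start()
    {
        vcam1.Follow = helicopter;   // trail the target at the offset set in the Body section
        vcam1.LookAt = helicopter;   // keep the target framed via the Aim section
    }
}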

The second Camera (vCam2) was stationary at the building site but set to Look At the Main Character. We timed the cut from one camera to the other so that once the helicopter landed it would pass control to the second camera and seamlessly start focusing on the Player. This was so easy it was ridiculous. The Camera objects were added to the Timeline and the split where we cut from one camera to the next is clearly visible in the screenshot below (two triangles). The first time I ran it and the view cut automatically from one vcam to the other gave me an enormous sense of satisfaction, like I'd just been named a modern day Hitchcock.

The Timeline Editor Window

To record what we had done as an AVI we opened the Recorder Window:

Opening the Recorder Window.

We used the default settings and triggered the start of the Recording with the start of the animation by having it in the Timeline. The Capture target was the Game View (you can also capture other elements of the Editor if you need them). The Output Resolution is interesting as you can use the size of the Editor Game window on your screen or set it to standard default movie formats.

The Recorder Window

That’s about it. We hit Play in the Editor and the Timeline starts the Recording of the AVI and synchronises the Animations and the Camera movement automatically. Once we are done we are left with a good quality moving image of our game screen that we will use as the cut-scene to drop the player into the start of our game. Obviously what we got here is just a “screen test” but I was really happy with what we could achieve in just a few hours and with so little complexity.

Xander out…

Unity Audio vs Wwise

To start with I wanted to do a general investigation into Wwise, the audio middleware for Unity by Audiokinetic. When I started working through it I figured it would be more interesting to look at Wwise in comparison to Unity's own audio API and mixer components, which have been around since Unity 5.

To do that I'm going to compare a game in three different builds. Build one is its original state with simple scripts that run an AudioSource.Play() method. In the second build I will add another layer of complexity by using the Unity built-in Mixer and see if there are any differences or advantages. Lastly I'll redo the project with the Wwise API and investigate how that impacts build size and project complexity, and weigh it up against the previous two builds. Mostly I'm looking for differences in performance between the three builds, and in build size and complexity, weighed against ease of implementation and flexibility.

I refreshed an old project called "MusicVisualiser" that I started for my Five Games in Ten Weeks Challenge. The game is like a singing solar system. There are a bunch of "planets" in the night sky that each play a set piece of music when clicked. It's a really simple concept and project, but I think it will work for this comparison as the parameters can be limited to just a few audio tracks while we can still play with spacing and roll-off and other advanced audio features.

Let’s have a look at the game first.

These "planets" are simple native Unity sphere meshes with an Audio Source component and a particle system that's triggered when they're clicked. You can see in the Audio Source that we are not using a Mixer for Output, so all the Audio Sources compete for resources and play at their default volume and priority.

The PlayMe script just takes in the AudioSource and plays it:

using UnityEngine;

public class PlayMe : MonoBehaviour
{
    public AudioSource my_sound;
    public string my_name;
    private GameObject target;
    private bool _mouseState;
    private Vector3 screenSpace, offset;

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            RaycastHit hitInfo;
            target = GetClickedObject(out hitInfo);
            if (target != null && target.name == my_name)
            {
                _mouseState = true;
                screenSpace = Camera.main.WorldToScreenPoint(target.transform.position);
                offset = target.transform.position - Camera.main.ScreenToWorldPoint(new Vector3(Input.mousePosition.x, Input.mousePosition.y, screenSpace.z));
                my_sound.Play();   // This is the Audio Component!
                GetComponent<ParticleSystem>().Play();
            }
        }
    }

    // Raycast helper (not shown in the original snippet) so this compiles as a complete script
    private GameObject GetClickedObject(out RaycastHit hit)
    {
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        return Physics.Raycast(ray, out hit) ? hit.transform.gameObject : null;
    }
}

Pretty simple, right? This is what the project looks like in the Profiler when it's running and being actively engaged with. At that point two Audio Sources are playing:

This is the build size from the Editor Log with our Audio Files broken out:

Build Report (Audio.Play)
Uncompressed usage by category:
Textures 0.0 kb 0.0%
Meshes 0.0 kb 0.0%
Animations 0.0 kb 0.0%
Sounds 547.5 kb 1.4%
Shaders 188.0 kb 0.5%
Other Assets 1.4 kb 0.0%
Levels 38.3 kb 0.1%
Scripts 941.9 kb 2.4%
Included DLLs 3.9 mb 10.2%
File headers 9.3 kb 0.0%
Complete size 38.6 mb 100.0%

Used Assets and files from the Resources folder, sorted by uncompressed size:
204.3 kb 0.5% Assets/SomethingLurks_AAS.wav
164.5 kb 0.4% Assets/Step2Down_AAS.wav
136.9 kb 0.3% Assets/Underwater_AAS.wav
41.8 kb 0.1% Assets/M1_M12_37_ThumPiano_Aflat1.wav

Unity Audio with Mixer

Now we add in the Mixer component to the project:

Then add a couple of Channels to the Mixer to split the audio between left and right. Then the Audio Sources are dropped into one or another of the Mixer Channels:

Adding the Mixer as the Output source
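
Dropping a source into a group can also be done from a script. Here is a minimal sketch, assuming a group named "Left" as in the split described above (AudioMixer.FindMatchingGroups and AudioSource.outputAudioMixerGroup are the relevant calls):

using UnityEngine;
using UnityEngine.Audio;

public class RouteToMixer : MonoBehaviour
{
    public AudioMixer mixer;   // assign the Mixer asset in the Inspector

    void Start()
    {
        // "Left" is an assumed group name - match it to your own Mixer channels
        AudioMixerGroup[] groups = mixer.FindMatchingGroups("Left");
        if (groups.Length > 0)
        {
            GetComponent<AudioSource>().outputAudioMixerGroup = groups[0];
        }
    }
}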

Next, for a bit more interest, I added some effects in the Mixer. Here is where we see the advantages of using the Unity Mixer. Sounds can be manipulated in complex ways and the Audio Output chain can be defined with presets and levels etc.

If we have a look at our Profiler while running with the new component we cannot really see any great differences. The 'Others' section of the CPU Usage is a bit higher and the Garbage Collector in the Memory graph is pumping regularly, but the Audio stats look pretty much unchanged:

Profiler Mixer

Mind you, this is a fairly low utilisation game so we might get wildly different stats if we were really putting the system under the pump, but I'm not performance testing here, just comparing run states between the two builds.

Next, if we build the game and have a look at the Editor Log, the only thing that's changed is that the "Other Assets" size is a KB higher (the Complete size has not changed):

Build Report (Mixer)
Uncompressed usage by category:
Textures 0.0 kb 0.0%
Meshes 0.0 kb 0.0%
Animations 0.0 kb 0.0%
Sounds 547.5 kb 1.4%
Shaders 188.0 kb 0.5%
Other Assets 2.3 kb 0.0%
Levels 38.3 kb 0.1%
Scripts 941.9 kb 2.4%
Included DLLs 3.9 mb 10.2%
File headers 9.3 kb 0.0%
Complete size 38.6 mb 100.0%

Unity with Wwise

Next we are going to add Wwise to the project. This is the basic workflow. In the Wwise Launcher we register our project, and on the first tab we are presented with three Hierarchies.

Project Audio Explorer in Wwise

The Master-Mixer Hierarchy – does what it says.
The Actor-Mixer Hierarchy – where most of your game audio develops (use the Sound SFX defaults).
The Interactive Music Hierarchy – other stuff we won't get into.

Events Tab

The next tab along is the Events tab, where you link your audio to game events. You can define your event here (use the default work unit).
Once you have the event there you can associate it with the audio in the Action List.

SoundBank Tab – this is the bit that gets imported into your project.

Next you generate a SoundBank with Wwise that includes your audio and the code for the API calls to trigger sounds. You export that SoundBank into your game engine and link up the calls in your code.

To Get Started with Wwise

To get started, make an account with Audiokinetic and download the Wwise Launcher. The Integration package for Unity can be downloaded and installed directly from the Wwise Launcher.

In the Wwise Launcher there is a WWISE tab from which you can install and start the application. Once you open it up you need to register your project within the launcher so Wwise can track you 🙂 (click on the key icon next to your Wwise project and select 'Register your Project to obtain a License'). Wwise will run in Trial mode, which restricts the SoundBank content to 200 media assets and cannot be used for commercial purposes. Pricing for licensing is on their site, but this is not a sales piece, so if you want it you can look it up.

There are a bunch of plugins by Audiokinetic and their partners available, and also Community offerings like AudioRain, a dedicated rain synth with 60 procedurally generated presets for rain. What's not to love about that!

There is a Wwise SDK for authoring your own plugins and a Wwise API which allows you to integrate into any engine, tool or application.

Audiokinetic also offer certifications that cover audio integration workflows, mixing virtual soundscapes, working with sound triggering systems, and performance optimisation:
https://www.audiokinetic.com/learn/certifications/

Basically, in Wwise you let the Launcher do all the setting up for you. You install the Wwise binaries from here and manage your platform versions. Projects can be integrated here, and if you don't have the necessary plugins installed the Wwise Launcher will install them for you.

Integrating the MusicVisualiser project with Wwise.
This is how big the Wwise Integration packages and binaries are.
Applying…
Done!

That’s basically it for the set up of Wwise and Integration with your Project. Next up we will have a look at what this has done to the Unity Console.

Wwise in Unity

The first thing we see is a bunch of errors that can be safely ignored. As we have not yet configured our project in Wwise with audio files and events, there is no SoundBank to generate yet.

Unity – Initial Errors can be ignored if you have not generated your SoundBank yet.

Back in the Unity Editor we have a new tab: the Wwise Picker tab contains all the elements of the Wwise project that have been imported with the project integration. There is also a WwiseGlobal Game Object in the Unity Hierarchy and all the Wwise folders in the Assets folder.

Unity Editor
The WwiseGlobal Game Object

Under the Component pull-down there is a whole slew of Ak (Audiokinetic) options.

Wwise Components.
Wwise Configuration Settings.

I know there has been a lot of “show and tell” in this post but I’m going to keep going and show the process of importing the audio into the Wwise Project, creating Events, and Generating the SoundBank.

Working in Wwise

In the Wwise Project Explorer I right click on the Default Work Unit and import the audio files that were part of my project. (I've stripped the raw files out of my project for now and removed all the Mixer components etc.)

Importing Audio Files into the Wwise Project.
This is what the files look like.
Right click on the file to create a new Event (which can be called in the Unity code).
Here is the event created for “Play”.
And all my “Play” events.

Finally a SoundBank is generated, from which the Unity project can access the sound files through the Audiokinetic API.

Generating a SoundBank

Wwise Audio in Unity

When we go back to our Unity Editor, refresh the project, and Generate SoundBanks, we are presented with the following in the Wwise Picker. We can now access these files and drag them onto our game objects directly. It's that simple. Drag a sound from the Picker onto a Game Object and it automagically creates a component that is immediately accessible from within the editor.

The new audio imported into the Wwise Picker.

Below the Play_Underwater_AAS event and audio file has been added to the Sphere Game Object.

The Triggers, Actions, and Callbacks can all be configured and accessed through the API. In my case I easily integrated the functionality I wanted with only a one line change to the attached PlayMe.cs script that we looked at above. So now, instead of my audio coming from the AudioSource component referenced by my_sound, the audio is played by AkSoundEngine.PostEvent:

            //my_sound.Play();
            AkSoundEngine.PostEvent("Play_Underwater_AAS", this.gameObject);

Actually getting Wwise installed, set up, and integrated with my project was very easy, but not without bumps. It takes a very long time for packages to download and I had a bit of trouble upgrading my Wwise Launcher from an old version (it got stuck and I had to remove it by hand and re-install). When I did have issues I got some excellent help from Audiokinetic: after logging a case I was emailed directly by a real person (which honestly was surprising and wonderful, that kind of support from a company when I'm on a trial license with no formal support agreement or rights).

So let's have a look at the differences in performance and package size. The first thing you notice with the Profiler below is that there is very little difference in performance, but we can no longer see our audio stats as the audio has been abstracted away from the Unity Engine. The graph still shows the resources being used by Audio, and the Total Audio CPU seems to be up to a third lower than with the native Unity Audio: it looks like it's being clamped at just over 1.2 MB instead of regular peaks over 3 MB.

Profiler with Wwise Audio running.

The Build Report is only a couple of MB larger for the total project size:

Build Report
Uncompressed usage by category:
Textures 0.0 kb 0.0%
Meshes 0.0 kb 0.0%
Animations 0.0 kb 0.0%
Sounds 0.0 kb 0.0%
Shaders 188.0 kb 0.5%
Other Assets 7.3 kb 0.0%
Levels 38.5 kb 0.1%
Scripts 1.3 mb 3.1%
Included DLLs 3.9 mb 9.7%
File headers 13.2 kb 0.0%
Complete size 40.5 mb 100.0%

Basically a 2 MB difference! The Sounds line has dropped to zero in the Build Report and we assume the audio is now part of "Other Assets" above.

I'm kinda blown away by how little additional file size there is to the build, considering the additional library code and available complexity that Wwise adds. There is literally a plethora of options and effects that we can play with in the Wwise package. It's a bit like the excitement I got after the install of my first real Audio DAW. The scope is part boggling, part fantastical wonder at where we can go next. (Audio does get me unusually stimulated, but that's to be expected and tempered accordingly.)

The questions I wanted to answer with this whole experiment were: 1. Would including an audio middleware like Wwise make my project more complex and difficult to manage? 2. Would the added package make my build much larger? And 3. Would the performance of the audio package be as good as the simple Unity Audio API? The answers are: No, No, and Yes. So I'm pretty happy with that, and if the cost of using the licensed version of Wwise is balanced against the advantages of using it in the total cost of the project then I would most definitely, one hundred percent, go for it.

Demo Text Adventure Game Released on Google Play

Hi Harmony here….

We released the demo game for our Text Adventure framework on the Google Play Store this week.

https://play.google.com/store/apps/details?id=com.ZuluOneZero.AdventureText

The Code for the Project is available on GITHUB:

https://github.com/zuluonezero/AdventureText

Quick Start Instructions for building a 2D Unity project are available on the README.

More detailed information is available in the ZuluOneZero – DevBlog.

See our posts about the development of this game for more information on how to configure the project.

I hope you enjoy playing with it.

If you have any questions or need support you can email us at zuluonezero.z10@gmail.com.

Harmony out…

Endless Unity Camera Tricks

In our game currently under development, Endless Elevator, I decided to add a new feature that gives the level more depth. The game is 2.5D and mostly sits in a very shallow Z axis, a limited X axis, and an endless Y axis. As the name suggests, your character is inside a never ending building trying to kill or avoid the enemies by escaping up elevators and escalators. The mock-up of a level below in the Scene view of Unity gives you the picture.

Endless Elevator Opening Level Spawn

The player view is a much smaller slice of this level…about this much:

Roughly how much the camera views

I had the idea that I wanted to extend this playing field into a deeper third dimension where the character could walk down a hallway, away from the camera, and seemingly deeper into the building. Instead of the camera following the character down the hall on the Z axis (as it follows him on the X and Y) I wanted to pan around the edge of the building and pick up the character again on the new parallel, so that it looks like the camera is turning ninety degrees and we are looking at a new side of the building.

Have a look at the .gif below and I'll try and explain that better. The top half of the image is the Scene view and the bottom half is what the camera (and the player) sees in the game. In the top half Scene view you can see my character in green, highlighted by the handler arrows. He scoots around a bit and then disappears down the hall. When he gets to a set depth it triggers the camera to move in the game window below, and you will see the levels reconfigure into a new building face (note the elevators and escalators and doors will be in different positions).

In the bottom half of the .gif, which shows what the player sees, it looks like once the character disappears down the hall the camera pans right, looks at the edge wall of the building as it goes around the corner in a ninety degree turn, and then follows the character on the new level again.

(Watch the top half for a bit then the bottom half)

You can see in the Scene view that we are not really moving the building. It just recreates the levels; the camera is doing all the work. It's not perfect yet, and without any background around the building to relate the movement to it's a bit hard to tell if we are turning ninety or one hundred and eighty degrees on that camera flip, but it's getting there.

It took a while to work out how to do this and I tried several different methods but this is the basic logic of the camera move script that is attached to the character.

  1. The movement of the camera is triggered when the character moves past a certain point on the Z axis.
  2. Stop the regular character and camera movement functions by disabling that scripted behaviour on each object.
  3. I set up an empty game object called the CameraLookAtPoint that hovers at the end of the building on the far Right of the X axis.
  4. Pan the camera Right toward the CameraLookAtPoint.
  5. When the camera gets to within a certain distance to the LookAtPoint it starts to rotate towards it.
  6. The camera moves around the LookAtPoint so that it faces that end wall of the building as it turns.
  7. At this point while the only thing the camera can see is that blank edge wall of the building I destroy the old level we just walked out of down the hall and create a new randomly generated level.
  8. This is the great illusion! The camera is then moved instantly to the far Left of the building (the opposite end) and it appears as if we have just turned a corner.
  9. Lastly the camera picks up the character again and we hand control back to the normal camera and character movement scripts.

This is where the CameraLookAtPoint sits that the camera rotates around ninety degrees as it gets to the edge of the building.

I’ll post the whole script below with comments but I’ll walk through the interesting bits here. 

To start off with I needed to grab references to several external elements like the camera, the level instantiating script, the character controller script, and the character's Rigidbody. (I needed the rb because when the levels were destroyed and recreated, gravity would take hold between them and the character would fall into the endless abyss!)

I had to use a series of "if" conditionals and Boolean flags to control the movements of the camera. This was surprisingly hard to get right. It's often not intuitive what the looping iterations in the Update function will do with your code, but this allowed me to slow things down and get control back.

The calls to the InstantiateScene1 script were needed to copy the variables used there in the main flow of the game to track what level we were on and how high up the building we had climbed. That way I could decouple that mechanism from this one and happily destroy levels and recreate them without interrupting the flow of the rest of the game.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class SpinLevelDownHall : MonoBehaviour {

    public GameObject cameraLookAtPoint;
    public bool triggered;
    public bool cleanAndRebuild;
    public bool moveRightEnd;
    public bool moveRightEndBack;
    public bool moveLeftEnd;
    public bool movebacktoCharacter;
    public int levelHolder;
    public float cleanUpHeightHolder;
    public bool firstRunHolder;
    public Vector3 target_camera_Position;

    private GameObject the_camera;
    private RigidbodyCharacter move_script;
    private CameraFollow cameraFollow_script;
    private InstantiateScene1 instantiate_script;
    private Rigidbody rb;

    // Use this for initialization
    void Start () {
        the_camera = GameObject.FindGameObjectWithTag("MainCamera");
        move_script = GetComponent<RigidbodyCharacter>();
        cameraFollow_script = the_camera.GetComponent<CameraFollow>();
        instantiate_script = GetComponent<InstantiateScene1>();
        rb = GetComponent<Rigidbody>();
    }

    // Update is called once per frame
    void Update () {
        if (transform.position.z > 15)
        {
            triggered = true;
            moveRightEnd = true;
            cleanAndRebuild = false;
            if (triggered)
            {
                // Stop the character and the camera moving
                move_script.enabled = false;
                cameraFollow_script.enabled = false;
                if (moveRightEnd)
                {
                    // Set the target camera position on the x axis at the far right of the building
                    Vector3 target_camera_Position = new Vector3(78f, the_camera.transform.position.y, the_camera.transform.position.z);
                    // Set the camera look at point on the y axis (so it's on the same level as the player)
                    cameraLookAtPoint.transform.position = new Vector3(cameraLookAtPoint.transform.position.x, transform.position.y + 4f, cameraLookAtPoint.transform.position.z);
                    // Start moving the camera
                    the_camera.transform.position = Vector3.MoveTowards(the_camera.transform.position, target_camera_Position, 50 * Time.deltaTime);
                    // When you get close to the end start swinging the camera around to look at the wall
                    if (the_camera.transform.position.x > cameraLookAtPoint.transform.position.x - 10)
                    {
                        the_camera.transform.LookAt(cameraLookAtPoint.transform);
                    }
                    // When you get really close to the first position move the camera beyond the wall to the side
                    if (the_camera.transform.position.x > target_camera_Position.x - 0.5f)
                    {
                        target_camera_Position = new Vector3(78f, the_camera.transform.position.y, 4f);  // 4f is perfect when the camera is at -90 deg
                        moveRightEnd = false;
                        moveRightEndBack = true;
                        cleanAndRebuild = true;
                        if (moveRightEndBack)
                        {
                            the_camera.transform.position = Vector3.MoveTowards(the_camera.transform.position, target_camera_Position, 50 * Time.deltaTime);
                            // When you get really REALLY close to the second position move the camera to the negative X Axis side
                            if (the_camera.transform.position.z > target_camera_Position.z - 0.2f)
                            {
                                moveLeftEnd = true;
                                if (moveLeftEnd)
                                {
                                    target_camera_Position = new Vector3(-78f, the_camera.transform.position.y, 4f);  // The other side
                                    the_camera.transform.position = target_camera_Position;  // Snap move the camera
                                    the_camera.transform.LookAt(cameraLookAtPoint.transform);
                                    moveRightEndBack = false;
                                    moveLeftEnd = false;
                                    movebacktoCharacter = true;
                                }
                            }
                        }
                    }
                }
                if (cleanAndRebuild)
                {
                    // Call cleanup on everything
                    rb.useGravity = false;  // So the character doesn't fall through the floor
                    cleanUpHeightHolder = instantiate_script.floorCntr;
                    cleanUpHeightHolder = cleanUpHeightHolder * 8;  // So it cleans up all the floors
                    firstRunHolder = instantiate_script.firstRun;
                    instantiate_script.cleanUp(cleanUpHeightHolder, firstRunHolder);  // Cleanup height is usually two levels below the character - we are raising it to two levels above him
                    // Then make three levels
                    levelHolder = Mathf.RoundToInt(instantiate_script.player_level);
                    instantiate_script.makeLevel(levelHolder);
                    instantiate_script.makeLevel(levelHolder + 8);
                    instantiate_script.makeLevel(levelHolder + 16);
                    cleanAndRebuild = false;
                }
                triggered = false;  // Makes it only run once
            }
        }
        if (movebacktoCharacter)
        {
            // New position is back at the character
            // First move the character out of the hall
            transform.position = new Vector3(transform.position.x, transform.position.y, -4);
            target_camera_Position = new Vector3(transform.position.x, transform.position.y + 4.9f, -20.51f);  // The starting camera position
            the_camera.transform.position = Vector3.MoveTowards(the_camera.transform.position, target_camera_Position, 50 * Time.deltaTime);
            the_camera.transform.LookAt(transform);
            // Once you get close to the original camera position enable normal camera tracking again
            if (the_camera.transform.position.x > transform.position.x - 0.02f)
            {
                movebacktoCharacter = false;
                rb.useGravity = true;
                move_script.enabled = true;
                cameraFollow_script.enabled = true;
            }
        }
    }
}

It's not the prettiest code, and I admit to hacking my way through it, but it works. Maybe you have a better method for doing something similar. If you do, please feel free to add a comment – I'd like to hear it.

Zulu Out.

Making a Custom Navigation Mesh for AI

Hi Xander here…

This week I decided to totally redo the way I have been handling character movement.

I used to have a free ranging character controller that basically moved in whatever direction your joystick wanted. I never really had that as my vision for this game, as I wanted a more 2.5D feel and a limited number of places you could move to. I got to this point because I was working on the enemy AI scripts using a custom mesh to navigate around.

The game level (or rather the endless pattern of a level) is 16 Units wide and two deep. In this case a Unit is a building component like a piece of floor or doorway or elevator shaft etc. All these components are 8 x 8 blocks in Unity scaling. This is an example of an elevator shaft:

When you build them all together it looks something like this (that’s a very basic mock up below):

So you've got a forward position where you can go up the stairs on the right, a middle position where you can go up the stairs on the left (see they are set back a bit), and a very back position which is through a doorway. So basically there are three parallel positions along the X axis.

What I wanted to do was to create a "patrol point" on every floor space within that grid of a floorplan and also create a patrol point if there is a door that is open.

There is no patrol point on the floor positioned at the top or bottom of a stair, so there is never anyone at the top or bottom of a stair to block you going up or to knock you back down.

This all gets created at instantiate time and every level is random so I cannot use any of the mesh or nav components that Unity provides.

So all my patrol points get made into lists when the floor is instantiated and added to an array of levels.

When an enemy AI agent starts off they read in all the available patrol point nodes on the floor and work out the available nodes around it to move to.

So the agent knows about the nodes around it in a four square plus its own central location.

As the game mostly scrolls along the left – right axis during game play the nodes are weighted so that travel along the X axis is more likely than the Z. 

At the end of the frame after moving (in late update) the nodes list is refreshed if a new node has been reached.

How does an agent find all the nodes around it?  Using a Raycast is too expensive.  So on each move the agent parses the list of nodes and works out the closest in each direction.

Basically, for each node in the list, get the x and z and subtract them from your own, then put that value in a temporary location. Every node gets tested the same way, and if the new value is less than the temporary value then you've got your closest node in that direction. You need to do this four times (left, right, forward and back) and handle the null values when there is no space to move next to you.
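
In code that pass looks something like the sketch below. This is a simplified version, assuming the patrol points are held as a list of Transforms (the hypothetical floorNodes); the real project builds these lists at instantiate time as described above:

using System.Collections.Generic;
using UnityEngine;

public class NodeScanner : MonoBehaviour
{
    public List<Transform> floorNodes;   // patrol points on the current floor (assumed structure)
    public Transform closestLeft, closestRight, closestForward, closestBack;

    // Parse the node list and hold the closest node in each direction (null if nothing is there)
    public void FindNeighbours()
    {
        closestLeft = closestRight = closestForward = closestBack = null;
        float bestLeft = float.MaxValue, bestRight = float.MaxValue;
        float bestForward = float.MaxValue, bestBack = float.MaxValue;

        foreach (Transform node in floorNodes)
        {
            float dx = node.position.x - transform.position.x;
            float dz = node.position.z - transform.position.z;
            float dist = Mathf.Abs(dx) + Mathf.Abs(dz);

            if (Mathf.Abs(dx) > Mathf.Abs(dz))   // mostly an X axis neighbour
            {
                if (dx < 0 && dist < bestLeft)  { bestLeft = dist;  closestLeft = node; }
                if (dx > 0 && dist < bestRight) { bestRight = dist; closestRight = node; }
            }
            else if (Mathf.Abs(dz) > 0f)         // mostly a Z axis neighbour
            {
                if (dz > 0 && dist < bestForward) { bestForward = dist; closestForward = node; }
                if (dz < 0 && dist < bestBack)    { bestBack = dist;    closestBack = node; }
            }
        }
    }
}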

At an agent update interval, when a new node is reached, we first check all the nodes in the list and make a new list of the nodes on the floor and our closest points. This gets added to the basic agent control behaviour of "looking around", where the AI stays in one spot and looks left and right in a rotation. In all cases, if they are looking left and the character is right then they cannot pursue him. If he fires then they will turn. All of these behaviours are then blended by weight.

I’m not sure if I will continue with this method for the character controller but it’s pretty good for the enemy AI scripts.

Getting a Foot in the Door of Game Design

First of all, sorry about the misleading title. This post is about getting the doors working in the Endless Elevator game that we are currently developing. I thought it was a good pun, and as this post is all about our development process, it wasn't too bad. The only career advice I've got is just to start making games…any games.

You’d think making a door that opens and closes would be a pretty simple thing to do but surprisingly it wasn’t.

I designed and built some 3D models of building components in MagicaVoxel and exported them as .obj files. I use MagicaVoxel because it’s free and really quick and simple to use. When I model components I can be sure that they are all exactly the same scale and that all the different bits (no matter when I made them) are going to fit together without any hassle. But the models are not optimised for game engines as they have a reasonably high poly count due to the modelling process within the program. Most of the models come out with heaps of unnecessary triangles that need to be managed first. In most cases I will import the object into Blender and use the “decimate” modifier on the planes (with an angle of about 20 if you are interested). In the case of the door object it was pretty simple, it is just a door after all, and I didn’t need to optimise it.

Here is what the door looks like in MagicaVoxel:

Notice that the door object is sitting just forward of the enclosing frame and that when exporting the object the center is at the middle bottom plane of that frame. The door is off center because it's modelled to fit exactly in a doorway that sits exactly in that position within the frame. This saves me a huge amount of time lining everything up when it gets to Unity, as the position of the door (or any other object for that matter) is already in the right spot. The problem is that the point of origin for the door is in the wrong spot: it exports a few units behind the door and on the floor! This becomes important when you try to rotate that object (like you would when opening and closing a door) and the pivot point is not where the hinges should be.

To fix this I had to import the .obj file into Blender and reposition the point of origin for the model.

This is what it looks like in Blender when I did this:

To do it I selected the edge that I wanted the door to pivot on when opened.

Then in Edit Mode:
Mesh -> Snap -> Cursor to Selected

In Object Mode:
Object -> Transform -> Origin to 3D Cursor

So that puts the cursor onto the correct position in the middle of that edge where the hinges would be, and resets the point of origin (which is where it will pivot when rotated in Unity) to the right spot.

Once we were all imported into Unity and looking good I set up a prefab for the “Doorway” object with the door as a child object. The doorway object has a bunch of colliders to stop the player walking through the walls and a big sphere collider where the door would be to trigger the open / close function when the player walks into it.

This is what the doorway looks like in Unity:

Next I scripted up a few options for opening the door. I'll post the script at the end of this article, but basically there were three ways of opening the door that I wanted to test. (Actually I tried it about six ways but whittled it down to the most simple methods. Just as an aside, there is an actual hinge joint component in Unity if you ever need it.)

This is how the script looks in the editor:

Notice the slider at the bottom to control how far I want the door to open. It's really handy to have this when playing in the editor and getting your settings right. If you want to know more about using it, see this post.

The three tick boxes are for testing the three different ways of opening the door.

Snappy was a quick, simple transform of the rotation between closed and open with no in-betweening. It looks a bit unnatural as the door magically goes from closed to open, but it's not too bad and most people are pretty used to similar behaviour in video games.

The active line in the code is:
the_door.transform.rotation = Quaternion.Euler(targetPositionOpen);

The next method works more like a door should, in that it swings open and closed the whole way in a steady fashion. But the big problem with this method was that while it was OK when the character was going into the doorway, with the swing of the door, when it was time to come back out of the doorway the door was in the way. There was not enough room to trigger the door opening from the other side without being hit by the door as it opened. Plus, if the player enters the collider from the wrong trajectory the character gets pushed through a wall by the swinging door, which is sub-optimal. I called this method InTheWay!

The active line here is:
the_door.transform.Rotate(-Vector3.up * 90 * Time.deltaTime);

In an effort to combat this I chose a hybrid method that swings the door open to a point that won't hit the player and then does the magic transform to get all the way open. I call this one aBitBoth. It looks a little weird too. Like there is an angry fairy pulling the door closed with a snap after the character enters.

Here are all three to compare.

Snappy

In The Way

A Bit of Both

I'm not too sure which one I'm going to use at this stage, as the Snappy method works best for now, but I like the In The Way method better. It looks more normal and I like that you have to wait just a few milliseconds for the door to swing (it adds tension when you are in a hurry to escape a bullet in the back). I could do something like halt the player movement from the rear of the door when it triggers to open from the closed side, or maybe play around with the radius of the sphere collider. Neither solution seems like a great idea to me right now, but something like that will need to be done if I'm going to use that method. Maybe I could just have the door swing both ways and open to the outside when he is behind it, but that's probably a bit weird for a hotel door.

Here is that script that I was testing with:

using UnityEngine;

public class OpenDoor : MonoBehaviour {

    public bool openMe;
    public GameObject the_door;
    public bool snappy;
    public bool inTheWay;
    public bool aBitBoth;
    public Vector3 targetPositionOpen;
    public Vector3 targetPositionClosed;
    [Range(0F, 180F)]
    public float turningOpen;

    void Start ()
    {
        targetPositionClosed = new Vector3(0f, 180f, 0f);
        targetPositionOpen = new Vector3(0f, turningOpen, 0f);
    }

    void Update()
    {
        if (openMe)
        {
            OpenMe();
        }
        else
        {
            CloseMe();
        }
    }

    private void OpenMe()
    {
        if (inTheWay)
        {
            if (the_door.transform.rotation.eulerAngles.y > turningOpen)
            {
                the_door.transform.Rotate(-Vector3.up * 90 * Time.deltaTime);
            }
        }
        if (snappy)
        {
            the_door.transform.rotation = Quaternion.Euler(targetPositionOpen);
        }
        if (aBitBoth)
        {
            if (the_door.transform.rotation.eulerAngles.y > turningOpen)  // 144f
            {
                the_door.transform.Rotate(-Vector3.up * 90 * Time.deltaTime);
            }
            else
            {
                the_door.transform.rotation = Quaternion.Euler(targetPositionOpen);
            }
        }
    }

    private void CloseMe()
    {
        if (inTheWay)
        {
            if (the_door.transform.rotation.eulerAngles.y <= 180)
            {
                the_door.transform.Rotate(Vector3.up * 90 * Time.deltaTime);
            }
        }
        if (snappy)
        {
            the_door.transform.rotation = Quaternion.Euler(targetPositionClosed);
        }
        if (aBitBoth)
        {
            if (the_door.transform.rotation.eulerAngles.y <= turningOpen)  // 144f
            {
                the_door.transform.Rotate(Vector3.up * 90 * Time.deltaTime);
            }
            else
            {
                the_door.transform.rotation = Quaternion.Euler(targetPositionClosed);
            }
        }
    }

    void OnTriggerEnter(Collider col)
    {
        string colName = col.gameObject.name;
        Debug.Log("Triggered OpenDoor!!! : " + colName);
        if (col.gameObject.name == "chr_spy2Paintedv2" || col.gameObject.name == "BadSpy_Package(Clone)")
        {
            openMe = true;
        }
    }

    void OnTriggerExit(Collider col)
    {
        if (col.gameObject.name == "chr_spy2Paintedv2" || col.gameObject.name == "BadSpy_Package(Clone)")
        {
            openMe = false;
        }
    }
}

Why Normalize()

I’ve been doing some work on the AI for enemy behaviours for an unreleased game Endless Elevator and have been delving into the book “Unity 2018 Artificial Intelligence Cookbook – Second Edition” by Jorge Palacios.

It uses the Normalize() function regularly to record the direction of an object in relation to another object, and it got me thinking about the usefulness of this function. You can see why something like knowing the direction of the Player could be good for an enemy AI behaviour, but I wanted to investigate more deeply how this works and how I could use it.

One of the things I hadn’t consciously been aware of, but is obvious once you point it out, is that a Vector3 can define a location (ie. a point in space 0, 0, 0) but it can also define a direction if you have a starting position and a target position.

A good example of a Vector being used to denote a location and a direction in Unity is the Ray.
The Ray consists of two Vector3 data points. The first Vector3 is the source position the ray is cast from and the second Vector3 is the direction towards the target.

When a direction Vector3 is Normalized it keeps its direction but its length (how far away one object is from another) is set to 1. In Unity, if the vector is too small to be normalized (the two points are practically on top of each other) it gets set to the zero vector instead.

As an example we can simplify a bit by limiting ourselves to a Vector2.

A Vector2 can also be either a point or a direction depending on what you want to use it for.
If it’s (3, 0) then it could be either a point at x=3, y=0, or a direction along the x axis (one dimension) with a length of 3 and a slope of 0.
If you Normalize() that example the Vector2 becomes (1, 0). It will have a length of 1 but still be pointing in the x direction.
If you add the y value (dimension) into the picture where x=3, y=3, the Normalized value becomes approximately (0.7, 0.7). So without going into the maths of why … you can see that the axes/dimensions impact on each other to define the Normalized direction.
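
You can check those numbers with a throwaway script (the printed values are rounded by the Vector2 ToString formatting):

using UnityEngine;

public class NormalizeDemo : MonoBehaviour
{
    void Start()
    {
        Vector2 v = new Vector2(3f, 3f);            // length is about 4.24
        Debug.Log(v.normalized);                    // roughly (0.71, 0.71) - length 1, same slope
        Debug.Log(new Vector2(3f, 0f).normalized);  // (1, 0) - still pointing along x
    }
}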

We can extrapolate this into three dimensions without too much difficulty but I’m not going to describe that here as it’s not that necessary to understand. We only need to grasp what this means and what sort of use you can make of it in a Unity project.

For Example…

This is a simple use case of why you might want to use Normalize().

(Note that there is also the normalized property, which leaves the current vector unchanged and returns a new normalized vector.)

I've created an example project where the relationship between two objects (a Cube and a Sphere) gives us a direction (using Normalize()). The direction is applied to a transform.Translate function on a third object (a Cylinder), which then moves in the given direction.

In the script below, attached to our Green Cube (the unchanging point of origin – 0, 0, 0), we define the Pink Sphere as the target. In the script we get the difference between the position of the Sphere and the Cube and Normalize() it to make a direction. It doesn't matter how far the Sphere is from the Cube, the direction stays the same.

Also, to extend the example, I've added a Ray with the same origin as the Cube (0, 0, 0) that will always point to the Sphere. This shows that the Normalized Vector3 is the same as the direction of the Ray pointing to the Sphere.

Normalizing Direction

The public Vector3 lineDirection is the difference between the position of the Cube and the Sphere. This is the variable we will pass to the Cylinder to move it in the required direction.

using UnityEngine;
using UnityEngine.UI;

public class HowToNormalizeDirection : MonoBehaviour {

    public Vector3 lineDirection;
    public GameObject target;
    public Text vector_text;
    public Text norm_text;
    public Text ray_direction;
    public Ray whatsTheRay;

    // Use this for initialization
    void Start () {
        lineDirection = new Vector3();
        whatsTheRay = new Ray();
    }

    // Update is called once per frame
    void Update () {
        // The difference between the two positions gives the direction (and distance) to the target
        lineDirection = target.transform.position - transform.position;
        vector_text.text = "Vector3 target.transform.position :" + lineDirection;
        lineDirection.Normalize();
        norm_text.text = "Vector3 Normalized() :" + lineDirection;
        // Passing the target position as the Ray direction works here only because
        // this Cube sits at the origin (0, 0, 0), so position and direction are the same
        whatsTheRay = new Ray(transform.position, target.transform.position);
        ray_direction.text = "Ray : " + whatsTheRay.ToString();
    }
}

Moving the Capsule in the Same Direction

In the script below, attached to the Cylinder, we get the direction from the lineDirection variable above, but we could also have used the built-in Ray.direction property to return the same result.

using UnityEngine;

public class MoveCapsule : MonoBehaviour {

    public Vector3 direction;
    public float speed;

    // Use this for initialization
    void Start () {
        direction = Vector3.zero;
        speed = 0.50f;
    }

    // Update is called once per frame
    void Update () {
        // Grab the direction worked out by the Cube's script each frame
        var script = GameObject.FindWithTag("Cube").GetComponent<HowToNormalizeDirection>();
        direction = script.lineDirection;
        //direction = script.whatsTheRay.direction;  // this returns the same result
        transform.Translate(direction * speed * Time.deltaTime);
    }
}

In the Video below you can see this demonstrated. The position of the Pink Sphere is being manually manipulated using the Transform on the right. The text areas expose the vector3 values being accessed by the objects and scripts.

So what's going on in this amateur hour video?

The three lines of text at the top of the game scene represent:

1. The Vector3 Position of the Pink Sphere.

2. The Vector3 Normalized direction, which is the relationship between the position of the Green Cube (0, 0, 0) and the Pink Sphere (x, y, z).

3. The uses of Vector3 in the Ray: first the position of the origin and then the Direction of the Ray, which is exactly analogous to the Normalized Vector3 in point 2 above.

The uses for something like this could extend to a weapon aiming system or a custom controller, or it could be passed into a path finding routine. This is not the only option, of course; there are other built-in methods like transform.LookAt(target) available which may accomplish your programming goal.

During the research for this post I found the following links helpful.

https://docs.unity3d.com/ScriptReference/Vector3.Normalize.html
https://docs.unity3d.com/ScriptReference/Vector3-normalized.html (which is a different but related function)
https://forum.unity.com/threads/what-is-vector3-normalize.164135/
https://answers.unity.com/questions/52881/vector-normalization-question.html
https://www.dummies.com/education/math/calculus/finding-the-unit-vector-of-a-vector/ (for why you really don’t want to know about the math or do it by hand)
https://www.mathsisfun.com/algebra/vector-unit.html (best for simple vector explanation)
https://docs.unity3d.com/ScriptReference/Ray.html
https://docs.unity3d.com/ScriptReference/Ray-ctor.html

The Dog Run is in Production on the Google Play Store

This week we moved our latest game The Dog Run into Production on the Google Play Store.

The Dog Run is an Endless Runner for Android that supports animal welfare!

It’s a free game. There is the option to watch ads but instead of in game rewards all profits from the advertising goes to support animal welfare and animal hospitals.

The game is about taking your fun lovin’ pooch for a run. But watch out! There is a bunch of obstacles in your path. It’s a good thing your dog is a natural jumper and can run all day in all sorts of weather.

Read the review from Daikon Media here.

They say, "…it's not only the style, that I like, it's also the unique sense of humor…the game is even fun and original."

Feedback can be posted on the Google Play Store,  on this website in the comments,  or directly by email.

ZuluOneZero Game Design
http://www.zuluonezero.net/
zuluonezero.z10@gmail.com

Image File Size in Unity and their Impact on Start Up Time on Android

Xander here…

We have been Beta Testing our soon to be released game The Dog Run and it's been mostly OK, but we had a number of issues with memory on smaller or older devices. We made some gains by modifying our audio files (see this post) but were still running into niggling crashes on start up and longer than normal load times.

We were getting feedback like:

“Hey, I installed the game and couldn’t run it. When I started it there was a black screen for about 15s and then it went back to the launcher. Then each time I went back to the game there was unity and game logo fading out and again the app crashed/hanged and I was sent back to the launcher.”

(Thanks slomoian and the_blanker for all your help testing)

Obviously feedback like this is a little disheartening and far from ideal. The game was running fine on every device and emulator I had access to, but it's only when you send something into the wild that you realise the full breadth of the spectrum that is the Android platform. I guess this is another lesson in the importance of proper Beta testing. One we hadn't learned last time we released an app (see this old post on the perils and difficulty of finding Beta Testers).

We were using adb logcat to monitor our start up problems but not finding a "smoking gun" that solved every case. It seemed to be a memory problem, often with the graphics cache, so we went back to the Unity Editor build log to investigate our image files. The game uses multiple large files to ensure that our animated sprites were always in the right spot. The game is dependent on the titular Dog hitting the ground line accurately on every frame to achieve the look we wanted when he runs and his paw breaks the ground line and appears as a gap. We used a "flip-book" old fashioned style of animation where each frame sits exactly on top of the old frame and everything lines up on a transparency, like in classic animated movies.

By using this schema we had to keep to a certain scale that fits within the constraints of a typical Android device format. This meant that when the images were imported the POT was not something we could play with easily to get performance gains. (Image files that have a width and breadth that is a power of two are faster and easier for the compression functions to work with – so 2, 4, 8, 16, 32, 64, 128, etc.) If I had the chance to do this again, this is something I would start doing right from the beginning of development. When going through the Editor Logs we did find something interesting (get to the Editor Logs by right clicking on the arrow or tab near the Console and selecting it).

We found that some of our image files were 10 MB and a few were 2 MB. Which was a little weird, as they were all exported as layers from the same Gimp file, so I must have done something in the import settings or the editor to change them.

This is a comparison of two files of the same dimensions and basically the same content but with two very different file sizes:

10.6 mb 0.8% Assets/artwork/RunOnSpot6.png

2.0 mb 0.1% Assets/artwork/DogSitHeadWag.png

The difference that I found was MIP Maps. I'd selected to use MIP Maps fairly early on as it made the artwork look smoother in the Editor. MIP Maps are generated in the engine to make smaller, pre-scaled versions of your artwork that can be used at longer distances from the camera where the detail is less visible. My game is 2D and has everything running at pretty much the same distance from the screen, so really MIP Maps should not be required. My art did look a bit better in the editor with them turned on, but on a smaller device like a phone I couldn't really tell the difference. See below the difference in a file with MIP Maps selected and a file without.

With MIP Maps turned on (see the file size at the bottom and that the type is RGBA 32 bit):

The same file with MIP Maps removed (down to 2 MB and using ETC2 compression):

This is the difference that generating those MIP Maps makes: your file is converted from the default Android texture compression to a larger (harder to process) RGBA 32 bit format.

So turning off MIP Maps across the three hundred plus image files in my game reduced my application start up time to under a few seconds and cut the uncompressed build size by over one thousand MB.
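
Rather than clicking through three hundred import settings by hand, an editor script can flip them in one pass. This is just a sketch, assuming the textures live under Assets/artwork as in the build report; TextureImporter.mipmapEnabled is the setting being toggled:

using UnityEditor;
using UnityEngine;

public class DisableMipMaps
{
    [MenuItem("Tools/Disable Mip Maps In Artwork")]
    static void Disable()
    {
        // Find every Texture2D asset in the artwork folder (path is an assumption)
        string[] guids = AssetDatabase.FindAssets("t:Texture2D", new[] { "Assets/artwork" });
        foreach (string guid in guids)
        {
            string path = AssetDatabase.GUIDToAssetPath(guid);
            TextureImporter importer = AssetImporter.GetAtPath(path) as TextureImporter;
            if (importer != null && importer.mipmapEnabled)
            {
                importer.mipmapEnabled = false;   // stop generating MIP Maps for this texture
                importer.SaveAndReimport();
            }
        }
    }
}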

This is the Build Report from the Editor Logs that shows the larger texture sizes and final build size:

Uncompressed usage by category:
Textures 1292.3 mb 94.6%

Complete size 1365.9 mb 100.0%

Compare this later Build Report with MIP Maps turn off to the original one above:

Textures 284.5 mb 82.6%

Complete size 344.6 mb 100.0%

It's a considerable difference with little or no quality loss on most devices. When I say most devices, there were a few cases where the running dog did look a little tatty. On very small emulated devices (3.5″ screens and low memory) the images were being scaled down quite a lot and the results were a lot less enjoyable, but still an acceptable compromise considering previously the game would not run on these devices at all.

The next thing I started playing with was the different texture compression variables available for Android. I tried all of the settings (see screenshot below) in a different build and tested them against at least ten different devices with various architectures and screen dimensions and Android versions.

In each of the cases but one there was at least one test device that failed to start the game. Once again, this exposes the issues of working with so many platform variables on Android. Even when I built the APK with the (default) ETC selected, one device failed the start up test. So in the end the final build used the "Don't override" setting, which seemed to work on all devices.

Hopefully this is helpful to someone else out there and if it is try hitting the “Like” button below or sharing the link (the feedback keeps me going).

I found these references useful when troubleshooting my start up issues and learning more about compression on Android:

https://docs.unity3d.com/Manual/android-GettingStarted.html

https://docs.unity3d.com/Manual/class-TextureImporterAndroid.html

https://docs.unity3d.com/Manual/class-PlayerSettingsAndroid.html

https://answers.unity.com/questions/1406451/sprites-increase-android-apk-size.html

https://www.unity3dtips.com/unity-texture-compression-android-ios/

Using ADB Logcat to Debug Unity Application Start Up Times

Our game The Dog Run has been in open Beta testing for some time and we are wrapping up new builds after the feedback we received during the process.

There is still time if you want to check out the Beta edition of the game on the Play Store: https://play.google.com/apps/testing/com.ZuluOneZero.TheDogRun

We used the built in Unity Profiler to work on performance improvements on the running game.
https://docs.unity3d.com/Manual/Profiler.html

But when it came to the application start up we found that we had a problem. It was taking over 20 seconds for the game to open up after a user pressed the icon.
Obviously this was way too long, as much of that time was spent sitting on a black screen.

Trouble was, with the Debug builds of our APKs running in the profiler, it would only pick up on game play issues and we couldn't identify what was causing the slowness on start up. So we switched to using ADB, the Android Debug Bridge, to see what was actually going on under the hood of the Android system while our game was loading.

If you want more info on the ADB tool have a look at my previous post here.

Unity Debugging with ADB for Android

We had a handful of APKs that we wanted to test against on our local workstation and used ADB commands to copy them over to our attached development device.
Like this:

C:\Users\<User>>adb install C:\Users\<User>\Downloads\TheDogRun_v13_debug.apk
Success

Then we ran the adb command with the logcat argument and passed that output into a file to read afterwards.
Pumping logcat into a file generates lots of data, including the buffer that was recorded before the command was run, so you will get lots of info from earlier that you either need to filter out or grep through.

This is one way to filter from the adb command line, but it's not as useful as grepping through the file:
>adb.exe -d logcat -e isApplicationExternalStorageWhitelisted
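
For reference, dumping the current buffer straight to a file (and clearing it between runs) looks like this:

>adb logcat -d > logcat.txt
>adb logcat -c

The -d flag dumps the current buffer and exits instead of streaming forever, and -c clears the buffer so the next capture starts clean.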

It’s a pretty big log generated in just a few seconds of logging:

I preferred to have the full log in the file and use a text editor and command line grep/egrep to get the info I wanted to hone in on.
On Windows I have a great Unix-like tool called MobaXterm which is really useful for this sort of work.
https://mobaxterm.mobatek.net/
It's a Windows ssh client but includes Cygwin-type access to your local Windows file systems.

So we went searching through the logs (it takes a while to get used to what you are looking at; I'd recommend getting on Google for anything that looks interesting).

Here is an example of the log:


It changed a little bit with each run, but this is the main info that I was using to make my judgements about start up speed.


### This is the initial call for the application
09-28 10:29:08.684 4938 4938 D StorageManagerService: getExternalStorageMountMode : final mountMode=1, uid : 10495, packageName : com.ZuluOneZero.TheDogRun

### This is the process being allocated
09-28 10:29:08.752 4938 4938 I ActivityManager: Start proc 8679:com.ZuluOneZero.TheDogRun/u0a495 for activity com.ZuluOneZero.TheDogRun/com.unity3d.player.UnityPlayerActivity

### The Window for the game is open and has focus
09-28 10:29:09.336 8679 8679 I Unity : windowFocusChanged: true
09-28 10:29:09.337 8679 8679 V InputMethodManager: Starting input: tba=android.view.inputmethod.EditorInfo@23ee73 nm : com.ZuluOneZero.TheDogRun ic=null
09-28 10:29:09.346 4938 6402 V InputMethodManagerService: windowGainedFocus : reason=WINDOW_FOCUS_GAIN client=android.os.BinderProxy@9752c36 inputContext=null missingMethods= attribute=android.view.inputmethod.EditorInfo@ea1162f nm = com.ZuluOneZero.TheDogRun controlFlags=#105 softInputMode=#20 windowFlags=#80e90500

### The Game has the Window on
09-28 10:29:09.420 4938 5124 D SamsungPhoneWindowManager: Turning screen on : com.ZuluOneZero.TheDogRun uid = 10495

### Splash Screen is coming up
09-28 10:29:09.424 4938 5124 D InputEventReceiver: channel ‘d7f15c8 Splash Screen com.ZuluOneZero.TheDogRun (client)’ ~ Disposing input event receiver.

### Splash Screen removed
09-28 10:29:09.431 3192 3192 I Layer : id=6192 onRemoved Splash Screen com.ZuluOneZero.TheDogRun#0

### This is a Debug Package but the Unity Debugger or Profiler was not attached, so there is a 4-5 second delay while it times out waiting for the connection
48832 [Id] AndroidPlayer(samsung_SM-G930F@192.168.1.139) [Debug] 0 [PackageName] AndroidPlayer” to [225.0.0.222:54997]…
09-28 10:29:09.544 8679 8696 D Unity : Waiting for connection from host on [0.0.0.0:55070]…

09-28 10:29:14.571 8679 8696 D Unity : Timed out. Continuing without host connection.

### Scripting Engine starts up
09-28 10:29:14.682 8679 8696 D Unity : InitializeScriptEngine OK (0xe7543ee0)
09-28 10:29:14.682 8679 8696 D Unity : PlayerConnection already initialized – listening to [0.0.0.0:55070]

## Creating Open GL
09-28 10:29:14.761 8679 8696 D Unity : OPENGL LOG: Creating OpenGL ES 3.2 graphics device ; Context level <OpenGL ES 3.1 AEP> ; Context handle -1013109888

### Unity reports its Unload time
09-28 10:29:22.067 8679 8696 D Unity : UnloadTime: 2.403000 ms

### Ads Start up
09-28 10:29:22.309 8679 8696 D UnityAds: com.unity3d.ads.cache.CacheDirectory.getCacheDirectory() (line:42) :: Unity Ads is using external cache directory: /storage/emulated/0/Android/data/com.ZuluOneZero.TheDogRun/cache/UnityAdsCache

### About this time the Debugging on the Application side kicks in and you start to get these sorts of messages in the log:
09-28 10:29:23.533 4938 9976 I ActivityManager: DSS on for com.ZuluOneZero.TheDogRun and scale is 1.0
09-28 10:29:23.731 8679 8696 I Unity : Building GPG services, implicitly attempts silent auth
09-28 10:29:23.731 8679 8696 I Unity : #0 0xc6578460 (libunity.so) GetStacktrace(int) 0x44
09-28 10:29:23.731 8679 8696 I Unity : #1 0xc5f5e978 (libunity.so) DebugStringToFile(DebugStringToFileData const&) 0x230
09-28 10:29:23.731 8679 8696 I Unity : #2 0xc546bf9c (libunity.so) DebugLogHandler::Internal_Log(LogType, core::basic_string<char, core::StringStorageDefault<char> >, Object*) 0xa8
09-28 10:29:23.731 8679 8696 I Unity : #3 0xc546be8c (libunity.so) DebugLogHandler_CUSTOM_Internal_Log(LogType, MonoString*, MonoObject*) 0xb4
09-28 10:29:23.731 8679 8696 I Unity :
09-28 10:29:23.731 8679 8696 I Unity : (Filename: ./Runtime/Export/Debug.bindings.h Line: 43)

09-28 10:29:24.925 8679 8850 I UnityAds: com.unity3d.ads.api.Sdk.logInfo() (line:70) :: Requesting configuration from https://publisher-config.unityads.unity3d.com/

Total start up time: around 16 seconds – we got lucky with this one.

But you know it's system dependent, so we did it another ten times for each package to make sure that the readings were comparable to our counting of seconds out loud while watching the device.

To make a long story short, we started getting an idea of what happens when the app starts up.
In the end the thing that most improved our start up time was the build where we changed the settings on our audio files in the Unity import settings.

Basically all audio files were converted from the Unity default to the Override for Android setting. The background music was streamed and shorter sound effects were compressed in memory.
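
If you want to script that change rather than clicking through each clip, an editor sketch like this works, assuming hypothetical clip paths (AudioImporter.SetOverrideSampleSettings is the call that applies the platform override):

using UnityEditor;
using UnityEngine;

public class AudioOverrideExample
{
    [MenuItem("Tools/Set Android Audio Overrides")]
    static void SetOverrides()
    {
        // Hypothetical paths - swap in your own clips
        ApplyOverride("Assets/Audio/background_music.wav", AudioClipLoadType.Streaming);
        ApplyOverride("Assets/Audio/jump_sfx.wav", AudioClipLoadType.CompressedInMemory);
    }

    static void ApplyOverride(string path, AudioClipLoadType loadType)
    {
        AudioImporter importer = AssetImporter.GetAtPath(path) as AudioImporter;
        if (importer == null) return;
        AudioImporterSampleSettings settings = importer.defaultSampleSettings;
        settings.loadType = loadType;                              // stream music, keep SFX in memory
        settings.compressionFormat = AudioCompressionFormat.Vorbis;
        importer.SetOverrideSampleSettings("Android", settings);   // the Override for Android setting
        importer.SaveAndReimport();
    }
}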

So hopefully, if you take this advice, you'll do that up front and you won't have to spend a fun rainy Sunday debugging start up time problems.

Interesting as it was, it would have been more fun doing promo art for the upcoming release!