Unity SVG Screen Test

I’ve been toying with an idea for a new text adventure game. I wanted to finally start using the framework we built earlier in the year (see this post and check out the code for yourself). I also wanted to get into the new-ish support for SVG graphics in Unity. I love vector graphics and I think this project is going to be a great way to use their simple, clean aesthetic.

I did a “screen test” of a very quick vector image using Inkscape. (I have a deep respect for the Linux and open source community and want to thank them now for such awesome software.)

To get started with SVG files in Unity 2018.x you need to import the Vector Graphics package using the Package Manager. This allows you to pick up SVG files when you import a new asset. You can also drag and drop SVG files from your file explorer into your project hierarchy.

The SVG image shown below is the three receding black squares. The SVG sprite is on a separate canvas from my text (and sorted behind it). I’ve added a couple of raw images in the same color as my background and an animator component to move them around to give the “drawing” effect.

I have to say I’m pretty happy with the resulting mock up and think this will be the way forward in developing this project. Resizing the screen view into different formats works as expected with the Vector image staying crisp (which is more than you can say for the poor four color buttons below it that use a scaled sprite – very fuzzy).

Zulu out.

Unity Audio vs Wwise

To start with I wanted to do a general investigation into Wwise, the audio middleware for Unity by Audiokinetic. When I started working through it I figured it would be more interesting to look at Wwise in comparison to Unity’s own audio API and mixer components, which have been around since Unity 5.

To do that I’m going to compare a game in three different builds. Build one is its original state, with simple scripts that call the AudioSource.Play() method. For the second build I’ll add another layer of complexity by using Unity’s built-in Mixer and see if there are any differences or advantages. Lastly I’ll redo the project with the Wwise API and investigate how that impacts build size and project complexity, weighing it up against the previous two builds. Mostly I’m looking for differences in performance between the three builds, and in build size and complexity, balanced against ease of implementation and flexibility.

I refreshed an old project called “MusicVisualiser” that I started for my Five Games in Ten Weeks Challenge. The game is like a singing solar system. There are a bunch of “planets” in the night sky that each play a set piece of music when clicked. It’s a really simple concept and project, but I think it will work for this comparison as the parameters can be limited to just a few audio tracks while still letting us play with spacing and roll-off and other advanced audio features.

Let’s have a look at the game first.

These “planets” are simple native Unity sphere meshes, each with an Audio Source component and a particle system that’s triggered when it’s clicked. You can see in the Audio Source that we are not using a Mixer for Output, so all the Audio Sources compete for resources and play at their default volume and priority.

The PlayMe script just takes in the AudioSource and plays it:

   public AudioSource my_sound;

   // (Excerpt from Update() – supporting fields like target, my_name,
   // _mouseState, screenSpace and offset are declared elsewhere in the script.)
   void Update()
   {
       if (Input.GetMouseButtonDown(0))
       {
           RaycastHit hitInfo;
           target = GetClickedObject(out hitInfo);
           if (target != null && target.name == my_name)
           {
               _mouseState = true;
               screenSpace = Camera.main.WorldToScreenPoint(target.transform.position);
               offset = target.transform.position - Camera.main.ScreenToWorldPoint(new Vector3(Input.mousePosition.x, Input.mousePosition.y, screenSpace.z));

               my_sound.Play();   // This is the Audio Component!
               var expl1 = GetComponent<ParticleSystem>();
               expl1.Play();
           }
       }
   }
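The GetClickedObject helper isn’t shown in the excerpt above. A minimal reconstruction (my sketch, not the original code) would raycast from the mouse position and return whatever collider was hit:

```csharp
using UnityEngine;

public class ClickPicker : MonoBehaviour
{
    // Hypothetical reconstruction of the GetClickedObject helper the
    // excerpt calls – returns the clicked GameObject, or null on a miss.
    private GameObject GetClickedObject(out RaycastHit hitInfo)
    {
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (Physics.Raycast(ray, out hitInfo))
        {
            return hitInfo.collider.gameObject;
        }
        return null;
    }
}
```

This only hits objects with a Collider attached, which the sphere meshes have by default.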

Pretty simple, right? This is what the project looks like in the Profiler when it’s running and being actively engaged with. At that point we have two Audio Sources playing:

This is the build size from the Editor Log with our Audio Files broken out:

Build Report (Audio.Play)
Uncompressed usage by category:
Textures 0.0 kb 0.0%
Meshes 0.0 kb 0.0%
Animations 0.0 kb 0.0%
Sounds 547.5 kb 1.4%
Shaders 188.0 kb 0.5%
Other Assets 1.4 kb 0.0%
Levels 38.3 kb 0.1%
Scripts 941.9 kb 2.4%
Included DLLs 3.9 mb 10.2%
File headers 9.3 kb 0.0%
Complete size 38.6 mb 100.0%

Used Assets and files from the Resources folder, sorted by uncompressed size:
204.3 kb 0.5% Assets/SomethingLurks_AAS.wav
164.5 kb 0.4% Assets/Step2Down_AAS.wav
136.9 kb 0.3% Assets/Underwater_AAS.wav
41.8 kb 0.1% Assets/M1_M12_37_ThumPiano_Aflat1.wav

Unity Audio with Mixer

Now we add in the Mixer component to the project:

Then I add a couple of Channels to the Mixer to split the audio between left and right, and drop each Audio Source into one or other of the Mixer Channels:

Adding the Mixer as the Output source
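You can also do this routing from code rather than the Inspector. The sketch below assumes a mixer group called “Left” (that name is just from my channel setup); AudioMixer.FindMatchingGroups looks the group up by its sub-path:

```csharp
using UnityEngine;
using UnityEngine.Audio;

public class RouteToMixer : MonoBehaviour
{
    public AudioMixer mixer;  // assign the Mixer asset in the Inspector

    void Start()
    {
        // "Left" is just my example channel name – use your own group's path.
        AudioMixerGroup[] groups = mixer.FindMatchingGroups("Left");
        if (groups.Length > 0)
        {
            GetComponent<AudioSource>().outputAudioMixerGroup = groups[0];
        }
    }
}
```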

Next, for a bit more interest, I added some effects in the Mixer. Here is where we see the advantages of using the Unity Mixer: sounds can be manipulated in complex ways and the audio output chain can be defined with presets and levels etc.

If we have a look at our Profiler while running with the new component we cannot really see any great differences. The ‘Others’ section of the CPU Usage is a bit higher and the Garbage Collector in the Memory graph is pumping regularly, but the Audio stats look pretty much unchanged:

Profiler Mixer

Mind you, this is a fairly low-utilisation game, so we might get wildly different stats if we were really putting the system under the pump, but I’m not performance testing here, just comparing run states between the two builds.

Next, if we build the game and have a look at the Editor Log, the only thing that’s changed is that the “Other Assets” size is a KB higher (the Complete size has not changed):

Build Report (Mixer)
Uncompressed usage by category:
Textures 0.0 kb 0.0%
Meshes 0.0 kb 0.0%
Animations 0.0 kb 0.0%
Sounds 547.5 kb 1.4%
Shaders 188.0 kb 0.5%
Other Assets 2.3 kb 0.0%
Levels 38.3 kb 0.1%
Scripts 941.9 kb 2.4%
Included DLLs 3.9 mb 10.2%
File headers 9.3 kb 0.0%
Complete size 38.6 mb 100.0%

Unity with Wwise

Next we are going to add Wwise to the project. This is the basic workflow: in the Wwise Launcher we register our project, and on the first tab we are presented with three Hierarchies.

Project Audio Explorer in Wwise

The Master-Mixer Hierarchy – does what it says.
The Actor-Mixer Hierarchy – where most of your game audio development happens (use the SoundSFX defaults).
The Interactive Music Hierarchy – other stuff we won’t get into.

Events Tab

The next tab along is the Events tab, where you link your audio to game events. You can define your event here (use the default work unit).
Once you’ve got the event there you can associate it with the audio in the Action List.

SoundBank Tab – this is the bit that gets imported into your project.

Next you generate a SoundBank with Wwise that includes your audio and the code for the API calls to trigger sounds. You export that SoundBank into your game engine and link up the calls in your code.

To Get Started with Wwise

To get started, make an account with Audiokinetic and download the Wwise Launcher. The Integration package for Unity can be downloaded and installed directly from the Wwise Launcher.

In the Wwise Launcher there is a WWISE tab from which you can install and start the application. Once you open it up you need to register your project within the launcher so Wwise can track you 🙂 (click on the key icon next to your Wwise project and select ‘Register your Project to obtain a License’). Wwise will run in Trial mode, which restricts the SoundBank content to 200 media assets and cannot be used for commercial purposes. Pricing for licensing is on their site, but this is not a sales piece, so if you want it you can look it up.

There are a bunch of plugins available from Audiokinetic and their partners, as well as community offerings like AudioRain, a dedicated rain synth with 60 procedurally generated presets for rain. What’s not to love about that!

There is a Wwise SDK for authoring your own plugins and a Wwise API which allows you to integrate into any engine, tool or application.

Audiokinetic do certifications that cover audio integration workflows, mixing virtual soundscapes, working with sound triggering systems, and performance optimisation: https://www.audiokinetic.com/learn/certifications/

Basically, in Wwise you let the Launcher do all the setting up for you. You install the Wwise binaries from here and manage your platform versions. Projects can be integrated here too, and if you don’t have the necessary plugins installed the Wwise Launcher will install them for you.

Integrating the MusicVisualiser project with Wwise.
This is how big the Wwise Integration packages and binaries are.
Applying…
Done!

That’s basically it for the set up of Wwise and Integration with your Project. Next up we will have a look at what this has done to the Unity Console.

Wwise in Unity

The first thing we see is a bunch of errors that can be safely ignored. As we have not yet configured our project in Wwise with audio files and events, there is no SoundBank to generate yet.

Unity – Initial Errors can be ignored if you have not generated your SoundBank yet.

Back in the Unity Editor we have a new tab: the Wwise Picker, which contains all the elements of the Wwise project that were imported with the project integration. There is also a WwiseGlobal Game Object in the Unity Hierarchy and all the Wwise folders in the Assets folder.

Unity Editor
The WwiseGlobal Game Object

Under the Component pull-down there is a whole slew of Ak (Audiokinetic) options.

Wwise Components.
Wwise Configuration Settings.

I know there has been a lot of “show and tell” in this post but I’m going to keep going and show the process of importing the audio into the Wwise Project, creating Events, and Generating the SoundBank.

Working in Wwise

In the Wwise Project Explorer I right-click on the Default Work Unit and import the audio files that were part of my project. (I’ve stripped the raw files out of my Unity project for now and removed all the Mixer components etc.)

Importing Audio Files into the Wwise Project.
This is what the files look like.
Right click on the file to create a new Event (which can be called in the Unity code).
Here is the event created for “Play”.
And all my “Play” events.

Finally a SoundBank is generated from which the Unity project can access the sound files through the AudioKinetic API.

Generating a SoundBank

Wwise Audio in Unity

When we go back to our Unity Editor, refresh the project, and Generate SoundBanks, we are presented with the following in the Wwise Picker. We can now access these files and drag them onto our game objects directly. It’s that simple. Drag a sound from the Picker onto a Game Object and it automagically creates a component that is immediately accessible from within the editor.

The new audio imported into the Wwise Picker.

Below the Play_Underwater_AAS event and audio file has been added to the Sphere Game Object.

The Triggers, Actions, and Callbacks can all be configured and accessed through the API. In my case I easily integrated the functionality I wanted with only a one-line change to the attached PlayMe.cs script that we looked at above. So now, instead of my audio coming from the AudioSource component referenced by my_sound, the audio is played by AkSoundEngine.PostEvent:

            //my_sound.Play();
            AkSoundEngine.PostEvent("Play_Underwater_AAS", this.gameObject);
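One assumption worth flagging (it’s part of my setup rather than anything shown above): PostEvent can only find the event if its SoundBank has been loaded first. The integration’s AkBank component can do this for you, or you can load it in code along these lines:

```csharp
using UnityEngine;

public class LoadMyBank : MonoBehaviour
{
    void Awake()
    {
        // "Main" is whatever you named your SoundBank in Wwise;
        // the two flags control decoding the bank and caching the result.
        AkBankManager.LoadBank("Main", false, false);
    }

    void OnDestroy()
    {
        AkBankManager.UnloadBank("Main");
    }
}
```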

Actually getting Wwise installed, set up, and integrated with my project was very easy, but not without bumps. It takes a very long time for packages to download, and I had a bit of trouble upgrading my Wwise Launcher from an old version (it got stuck! and I had to remove it by hand and re-install). When I did have issues I got some excellent help from Audiokinetic: after logging a case I was emailed directly by a real person (which honestly was so surprising and wonderful, to get that kind of support from a company when I’m on a trial license with no formal support agreement or rights).

So let’s have a look at the differences in performance and package size. The first thing you notice with the Profiler below is that there is very little difference in performance, but we can no longer see our audio stats as they’ve been abstracted away from the Unity Engine. The graph still shows the resources being used by Audio, and the Total Audio CPU seems to be up to a third lower than with the native Unity audio: it looks like it’s being clamped at just over 1.2 MB instead of regular peaks over 3 MB.

Profiler with Wwise Audio running.

The Build Report is only a couple of MB larger for the total project size:

Build Report
Uncompressed usage by category:
Textures 0.0 kb 0.0%
Meshes 0.0 kb 0.0%
Animations 0.0 kb 0.0%
Sounds 0.0 kb 0.0%
Shaders 188.0 kb 0.5%
Other Assets 7.3 kb 0.0%
Levels 38.5 kb 0.1%
Scripts 1.3 mb 3.1%
Included DLLs 3.9 mb 9.7%
File headers 13.2 kb 0.0%
Complete size 40.5 mb 100.0%

Basically a 2 MB difference! The Sounds category has disappeared from the Build Report, and we assume the audio is now part of “Other Assets” above.

I’m kinda blown away by how little additional file size there is to the build, considering the additional libraries, code, and available complexity that Wwise adds. There is literally a plethora of options and effects that we can play with in the Wwise package. It’s a bit like the excitement I got after the install of my first real audio DAW. The scope is part mind-boggling and part fantastical wonder at where we can go next. (Audio does get me unusually stimulated, but that’s to be expected and tempered accordingly.)

The questions I wanted to answer with this whole experiment were: 1. Would including an audio middleware like Wwise make my project more complex and difficult to manage? 2. Would the added package make my build much larger? And 3. Would the performance of the audio package be as good as the simple Unity Audio API? The answers are: No, No, and Yes. So I’m pretty happy with that, and if the cost of the licensed version of Wwise is balanced against the advantages of using it in the total cost of the project, then I would most definitely, one hundred percent, go for it.

Preparing to Draw

Hi Gene here…

This post is about preparing for a good drawing. Drawing is the fundamental skill for all (or most) forms of art. I’d like to be better at it but I really only just get by…

But still, you have to prepare to do your best, otherwise you are setting yourself up to fail. So this is the process or workflow that I find works best for me. I’ll take you through the process and use a rough example drawing as we go.

A lot of these ideas come from reading Andrew Loomis and Walt Stanchfield. (For reading about drawing, rather than actually doing drawing, I cannot recommend Stanchfield’s Gesture Drawing for Animation enough.)

I often think of this quote (well paraphrase…) from Andrew Loomis when I start out to draw something:

“You must have a desire to give an excellent personal demonstration of ‘Ability’ coupled with a capacity for unlimited effort that hurdles the difficulties that would frustrate lukewarm enthusiasm.”

The Idea

To begin with, every drawing starts out with a message, purpose, or job to do: the Idea or Emotion of the Drawing. First and foremost you have to draw an idea. Every object that you put in your drawing is an elaboration of that idea.
Your idea has to be an action (a verb – a “doing” word), but the vehicles of that action are the things/objects in your drawing. Those things can be a figure, ten figures, a dog, a house, a tree, a swirling galaxy, or whatever.
If it’s a figure, then the pose, the anatomical structure, etc. have to portray that idea. In every drawing you have to find that emotion of the idea. It’s a bit of a nebulous concept but I don’t have any other way to describe it.

For example, in figure drawing the essence of the idea is all the outward manifestations of that internal emotion. Every moving part and direction portrays the motive and mood of the drawing. Your character has to be responding characteristically to some real or imaginary motivation.
To quote Stanchfield:
“These are basic human emotions such as joy, sorrow, anger, tenderness, submission, domination, fear, surprise, distress, disgust, contempt, and shame.”

The second part to this idea is the story: what happens next. There’s no need for a whole story to be crammed into one drawing; all you need is your figure doing something, or reacting to something, in a “characteristic” way for who they are supposed to be.

Preparation to Draw

As I said before, you have to prepare to make a good drawing.
It usually doesn’t just happen if you simply start drawing.

This is the best process I’ve found that does just that.

It starts with Mental Preparation or Rough Sketching.
You have to answer these sorts of questions about what you want to draw:
What is the idea?
What is your pose?
Is it the extreme of the action?
Is there an action and a re-action?
What is the visual depth?
Is there a primary and secondary action?
What is the “stage” for the action?
What is the anticipation? (What is just about to happen?)
Will you use caricature?
What details will you include?
What objects will you use?
Do the objects have a texture?

Once you have worked your way through those questions try starting your first drawing.

This one simplifies your idea and starts nutting out the technical execution.

This is the first sketch for my example drawing.

What is the idea? A piano player immersed in playing. The idea is total absorption in the music.

What is your pose? Sitting one leg up tapping – hands flying. I want every action to be reinforcing that one-ness with the music.

Is it the extreme of the action? Not in the first sketch I did – that right hand could be up higher with the fingers poised like an eagle about to strike. He is supposed to be immersed in the action, so his head could be down further or looking at the sky. The left leg is supposed to be horizontal and then on tip-toes. Maybe it should poke out more to the left so you can see that outline instead of being hidden in the foreshortening.

Is there an action and a re-action? Not really – that right arm really needs to look like it’s at the top of its upward trajectory and is about to slam down. The shoulder could be either more hunched or raised up. The other shoulder needs to be stretched out like he’s really reaching for the low note. The tapping foot needs to be up as well and just about to come down. The left foot needs to be jittering about and only just holding his balance on that stool.

What is the visual depth? In the sketch it’s quite shallow. No background and a very close middle ground.

Is there a primary and secondary action? The primary is that hand. The secondary is the repeat of that in the foot and the hunching or lifting of the body.

What is the “stage” for the action? Is this a bar in a western, or a jazz club, or a luxury penthouse, or a garret? I think I’ll go for a down-and-out garret. A total slum of a place that he is escaping with the music. I think I will change the format from landscape to portrait to hem him in and make room for a window.

What is the anticipation? (What is just about to happen?) His right hand is just about to crash down and peal out the most amazing lick while the left hand pumps the bass notes. The jittering and stomping foot are like the rhythm section.

Will you use caricature? I don’t think so.

What details will you include? What objects will you use?
Whisky glass! Shadows! Mood lighting. Other people? I don’t think so it’s all about him. Cigarette ash. Old stool and table lamp. Add a broken window and sliced up blinds behind him with a crappy part of the city and the moon overhead.

Do the objects have a texture? Woody piano, dirty floor…

This is where I got the inspiration: the second mannequin looked like he was playing the keyboard.

First Drawing

Aim for Simplification.
Shapes and composition.
What are the most basic shapes? (Try and limit it down to three, or six at most.) Use the square, the rectangle, the circle or ellipse, and triangles.
Define the Scale and point of view.
(Which perspective are you using? How many vanishing points?)
Is there a Direction (or Flow)? (Beat or Rhythm.)
Is there Tension? Is it Extreme? (Use extreme poses and balance action and reaction to create tension.)
Where is the overlap? Which objects are in front or behind?
What are the positive and negative shapes?
What is the extreme pose? This usually means the farthermost extension of some pose just prior to a change of direction.
Your drawing should show, in a flash, what is happening in the pose.
Those extremes are vital to explaining the idea.
I’ll paraphrase Stanchfield again:
If the extreme pose is missing or diluted, the drawing will deteriorate from expressive to bland or confusing or boring.
The Silhouette almost explains “Extreme,” if it is not thought of as a tracing of the outside of the figure.
The extreme pose is generated by the forces at play in a gesture (the force and thrust and tension).

This is where I start playing with the basic shapes and setting up the perspective
(I use Carapace for perspective guides)
In this version I try and tighten up the figure a bit more

Second Drawing

The Second Drawing is about mass and the solid and flexible parts of the subject. It’s also about expressing the tension of the idea:
Model the figure/character/object roughly.
Give it weight and mass. (depth and volume.)
Use planes to provide solidity.
What is the weight distribution? (If it’s a figure – how it balances itself due to what it is doing.)
Thrust and Body Language. (It usually requires a limb to be thrust out – a hip thrust, or shoulder shrugged up, or knees apart, or arms out.)
Tension and Counterpart. (Whenever one member of the body moves, set up a counter move with its counterpart.)
Tension is captured when one elbow is working against the other or one knee against the other.
Feet, hands, hips and shoulders should always be in counter position.
Never draw one part of the body without drawing the counter move of its opposite at the same time …. never.
Use the solid and flexible parts of the body as the basis for the angles that portray the action.

Blocking in the main shapes in the body.
Starting to look OK

Third Drawing

Sometimes I’ll do a third drawing (or incorporate it into the second). This one concentrates on the line:
Define the line and silhouette.
Use arcs to define movement (and follow through).
Split it up by straights and curves.
Straights and curves when used logically can emphasise and clarify the gesture.
Straights and curves can be used for “squash” and “stretch”.
Further define the direction of the drawing – make all the elements come together to define the idea.

Concentrating on the lines

Fourth Drawing

This one is all about Perspective and Anatomy. Use it. Tighten it. Get your straights and curves to follow it.
Use Reference images and get it right.
Draw the bones first (in perspective) or a rough skeleton. Get the perspective right now. Then do surface form.
Model the muscles or flesh.
Focus down on parts.
Also textures – what parts have what texture or shading etc.

These are some references I used
Grey scale painting
Starting to Colour

Fifth Drawing

Draw everything again!
But this time picking the best bits of all drawings.
Concentrate on line quality.
Concentrate on tone.
Concentrate on light.

Drawing

Finally a few notes about Drawing from Life.
Everywhere you go take a sketchbook.

When you draw try to first concentrate on color.
Then switch to dark and light (tone or texture),
then to masses,
then to the three-dimensional qualities of things near and far.
Now, try to see all of those things at once.

Finally an inspirational quote from Stanchfield:
“Carry a sketch book—a cheap one so you won’t worry about wasting a page. Sketch in the underground, while watching television, in pubs, at horse shows. Sports events are especially fun to sketch—boxing matches, football games, etc. Draw constantly. Interest in life will grow. Ability to solve drawing problems will be sharpened. Creative juices will surge. Healing fluids will flow throughout your body. An eagerness for life and experience and growth will crowd out all feelings of ennui and disinterest.
Where are you going to get all this energy, you ask? Realize that the human body is like a dynamo, it is an energy producing machine. The more you use up its energy, the more it produces. A work-related pastime like sketching is a positive activity. Inactivity, especially in your chosen field, is a negative. Negativity is heavy, cumbersome, debilitating, unproductive and totally to be avoided. Take a positive step today. Buy a sketch book and a pen (more permanent than pencil), make a little rectangle on the page and fill it with a simple composition.”


Carapace

This is Carapace, a tool designed by Epic Games and made available for free. I find it very useful. The link on their website is broken, but if you search for it it’s still around. One source you can get it from is here: https://www.florianhaeckh.com/blog/carapace

Unity 2D Curves using Triangles

Hi Xander here….

I know I shouldn’t be spending time doing this sort of stuff when I’ve got games to make, but I got really sidetracked with this little brain boiler. I got the idea while doing some maths research and came across an image of a cat’s cradle spun in a triangle. The way the lines joined made a perfect curve, and I really liked the idea of doing something like that for making custom curves in games. I know the idea is probably not original and there have got to be some better implementations out there, but once my noodle started working on this I got a little obsessed with seeing it through to the end.

I have written before about making curved movement by using sin functions and still think that’s a pretty cool way to do it. You can read about it here: (Fly Birdy Fly! 2D Curved Movement in Unity). But this is a much more intuitive way to get the perfect curve you want, and it’s very easy to plot and track the path of movement without having to guess.

This is how it works…

You take the three points of a triangle. I was thinking of something like a cannon shot, or a lobbed object, or a flying arrow to start with, so I called them Source, Height and Target. You measure the distance between those points and make lines to form a triangle. Then you cut those lines into equal segments and start joining one point on one line to another point on the other line, all the way down the length. It’s easier to explain in an image:

Building a Triangle and “Cat’s Cradle” lines to make a curve!

Now for the mathy part… Once you have those lines drawn, you use algebra to find the intersection point of each line and the next to get your curved path! Every additional line crosses the one before, and by finding the point where they cross you get a list of points that make a curve. (Incidentally, this is the classic “string art” construction of a quadratic Bézier curve, with the Height point acting as the control point.)

Simple curves only need a few lines.

Five Lines

The more lines you use the smoother your line is…

Ten Lines
Twenty Lines

Start moving around those points of the triangle and it becomes really easy in the Unity Editor to Map and draw custom curves. This kind of blew my mind.

Different Types of Curves

Here we have a number of different curves all just by making a few tweaks to the position of those three points of the triangle. I’ve used the intersecting points to draw a parabolic line on the game scene below.

Here are a few of the same images zoomed in (in case you are reading on your phone).

The Code

I’ll put the full script at the bottom of the post but for now I’ll work through the code a little bit.

If you want to copy the script, you need to attach it to a GameObject that you want to move (or, if you want to draw lines, to a GameObject with a Line Renderer).

The script has a number of check boxes exposed in the editor which lets you control the movement and drawing functions as well as resetting and applying changes after moving the triangle’s points.

The only other variable that you can play with is the timeToHit float. This number controls how many lines you want to use to create the curve. Remember: the more lines, the smoother the movement, but the higher the processing cost. (That said, I’ve yet to do any serious profiling of the script, but I haven’t found any real performance hits yet.)

Much of everything else is public so you can see what’s going on inside all the Lists and Arrays.

… … … (Editor View)

Defining the Triangle

First of all we get the positions of the three triangle points and find the length (magnitude) of the lines between them using normal vector maths.

Then we divide those lines by the number of strings we want to have (timeToHit) and work out the relative size of each segment:

        Vector3 X_line = source - target;  
        X_line_length = Vector3.Magnitude(X_line);
        Vector3 Y_line = height - source;
        Y_line_length = Vector3.Magnitude(Y_line);
        Vector3 Y_Negline = target - height;
        Y_Negline_length = Vector3.Magnitude(Y_Negline);

        X_line_bit_x = (height.x - source.x ) / timeToHit;
        X_line_bit_y = (height.y - source.y) / timeToHit;
        Negline_bit_x = (target.x - height.x) / timeToHit;
        Negline_bit_y = (height.y - target.y) / timeToHit;

Get the Points Along Each Line

Next we iterate through all the points on the lines and make a pair of Lists (one for the forward or positively sloping line and one for the negatively sloped line):

        for (int i = 0; i < timeToHit + 1; i++)
        {
            P_lines.Add(new Vector3(Px, Py, 0f));
            Px += X_line_bit_x;
            Py += X_line_bit_y;

            Q_lines.Add(new Vector3(Qx, Qy, 0f));
            Qx += Negline_bit_x;
            Qy -= Negline_bit_y;
        }

Get Intersection Points

Getting the intersection points was much easier to do in 2D, but it is totally achievable if you wanted to extend it to 3D. We pass in our start and end points on each line (x and y coordinates) and get back the intersection point (which is then converted back to a Vector3):

            myPoint = findIntersectionPoints(
                (new Vector2(P_lines[i].x, P_lines[i].y)), 
                (new Vector2(Q_lines[i].x, Q_lines[i].y)),
                (new Vector2(P_lines[bc].x, P_lines[bc].y)), 
                (new Vector2 (Q_lines[bc].x, Q_lines[bc].y)));
            Vector3 myPoint_3 = new Vector3(myPoint.x, myPoint.y, 0f);
            IntersectionPoints.Add(myPoint_3);

(If you want to do more than idly read about this stuff, have a look at Math Open Ref for more information on finding the intersection of two lines. I promise it’s actually really interesting.)

The maths bit (here I’ve wrapped it in the findIntersectionPoints signature so you can see the parameters):

Vector2 findIntersectionPoints(Vector2 Line1Point1, Vector2 Line1Point2,
                               Vector2 Line2Point1, Vector2 Line2Point2)
{
    // Denominator: the cross product of the two direction vectors.
    // (If this is zero the lines are parallel and never intersect.)
    float P1 = (Line2Point2.x - Line2Point1.x) * (Line1Point2.y - Line1Point1.y)
             - (Line2Point2.y - Line2Point1.y) * (Line1Point2.x - Line1Point1.x);

    // P2 is how far along line 2 the intersection sits.
    float P2 = ((Line1Point1.x - Line2Point1.x) * (Line1Point2.y - Line1Point1.y)
             - (Line1Point1.y - Line2Point1.y) * (Line1Point2.x - Line1Point1.x)) / P1;

    // Walk that fraction along line 2 to get the intersection point.
    return new Vector2(
        Line2Point1.x + (Line2Point2.x - Line2Point1.x) * P2,
        Line2Point1.y + (Line2Point2.y - Line2Point1.y) * P2);
}

That’s about it for the tricky stuff. There is a function to draw a line along the curved path and a function to move the attached object along the path as well. Add in a few GUI functions for displaying the pretty stuff in the scene view and you are done.

Moving the Green Sphere

This is an example of the script running in the editor that shows the scene view with the OnGui helper lines and then switches to the game view where I use the function to draw a curve and then move the green sphere along that path.

Full Script:

Here is the full script…enjoy!

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class CurveFunction : MonoBehaviour {

    public bool resetMe;            // Use these to manage the screen display
    public bool updateMe;
    public bool drawMe;
    public bool moveMe;

    public GameObject Source;       // The three points of the triangle
    public GameObject Target;
    public GameObject Height;
    public Vector3 source;          // The three points of the triangle
    public Vector3 target;
    public Vector3 height;

    public float timeToHit;         // A variable used to split the lines of the triangle into equal parts
    public int targetreached = 0;

    public float X_line_length;     // The length of the horizontal line between source and target
    public float Y_line_length;     // Length from source to height
    public float Y_Negline_length;  // Length from height to target (negative slope of the triangle)
    public float X_line_bit_x;      // The x and (below) y increments along the X_line
    public float X_line_bit_y;
    public float Negline_bit_x;     // The x and (below) y increments along the Negline
    public float Negline_bit_y;
    public float[] X_line_bit_xs;
    public float[] X_line_bit_ys;
    public float[] Negline_bit_ys;

    public List<Vector3> P_lines = new List<Vector3>();             // A list of points on the Y_line
    public List<Vector3> Q_lines = new List<Vector3>();             // Same for the Negline
    public List<Vector3> IntersectionPoints = new List<Vector3>();  // Where two lines cross

    public float Px;        // Used as shorthand for points on the lines when calculating
    public float Py;
    public float Qx;
    public float Qy;
    public bool isFound;
    public float speed;     // Used for the Move function
    public LineRenderer lineRend;
    public int bc;

    // Use this for initialization
    void Start () {
        source = Source.transform.position;
        height = Height.transform.position;
        target = Target.transform.position;
        getPointsOnTriangle();
        Px = source.x;
        Py = source.y;
        Qx = height.x;
        Qy = height.y;
        makeLineArrays();
    }

    // Update is called once per frame
    void Update () {
        if (updateMe)
        {
            getPointsOnTriangle();
            makeLineArrays();
            updateMe = false;
        }
        if (moveMe)
        {
            MoveMe();
        }
        if (drawMe)
        {
            drawLines();
        }
        if (resetMe)
        {
            ResetMe();
        }
    }

    void getPointsOnTriangle ()
    {
        source = Source.transform.position;
        height = Height.transform.position;
        target = Target.transform.position;
        // Define the lines of the triangle and get their lengths
        Vector3 X_line = source - target;
        X_line_length = Vector3.Magnitude(X_line);
        Vector3 Y_line = height - source;
        Y_line_length = Vector3.Magnitude(Y_line);
        Vector3 Y_Negline = target - height;
        Y_Negline_length = Vector3.Magnitude(Y_Negline);
        // timeToHit is not really a time but an increment of how many times we want to cut the line into
        // chunks to make the lines from. More lines give better curve points but more processing.
        X_line_bit_x = (height.x - source.x) / timeToHit;
        X_line_bit_y = (height.y - source.y) / timeToHit;
        Negline_bit_x = (target.x - height.x) / timeToHit;
        Negline_bit_y = (height.y - target.y) / timeToHit;
        // Handy handlers for the x and y values of the source and height.
        Px = source.x;
        Py = source.y;
        Qx = height.x;
        Qy = height.y;
    }

    void makeLineArrays()
    {
        // Clear the lists first so updateMe can be run repeatedly without the lists growing
        P_lines.Clear();
        Q_lines.Clear();
        for (int i = 0; i < timeToHit + 1; i++)
        {
            P_lines.Add(new Vector3(Px, Py, 0f));
            Px += X_line_bit_x;
            Py += X_line_bit_y;
            Q_lines.Add(new Vector3(Qx, Qy, 0f));
            Qx += Negline_bit_x;
            Qy -= Negline_bit_y;
        }
        makeIntersectionPoints();
    }

    public void makeIntersectionPoints()
    {
        bc = 0;
        IntersectionPoints.Clear();
        Vector2 myPoint = Vector2.zero;   // It's a bit easier to do this in 2D, so convert.
        for (int i = 0; i < timeToHit; i++)
        {
            if (bc < timeToHit)
            {
                bc++;
            }
            myPoint = findIntersectionPoints(
                new Vector2(P_lines[i].x, P_lines[i].y),
                new Vector2(Q_lines[i].x, Q_lines[i].y),
                new Vector2(P_lines[bc].x, P_lines[bc].y),
                new Vector2(Q_lines[bc].x, Q_lines[bc].y));
            Vector3 myPoint_3 = new Vector3(myPoint.x, myPoint.y, 0f);
            IntersectionPoints.Add(myPoint_3);
        }
        IntersectionPoints.Add(target);
    }

    /// Code modified from: https://blog.dakwamine.fr/?p=1943
    /// (Thanks for the leg up!)
    public Vector2 findIntersectionPoints(Vector2 Line1Point1, Vector2 Line1Point2, Vector2 Line2Point1, Vector2 Line2Point2)
    {
        float P1 = (Line2Point2.x - Line2Point1.x) * (Line1Point2.y - Line1Point1.y)
                 - (Line2Point2.y - Line2Point1.y) * (Line1Point2.x - Line1Point1.x);
        float P2 = ((Line1Point1.x - Line2Point1.x) * (Line1Point2.y - Line1Point1.y)
                 - (Line1Point1.y - Line2Point1.y) * (Line1Point2.x - Line1Point1.x)) / P1;
        return new Vector2(
            Line2Point1.x + (Line2Point2.x - Line2Point1.x) * P2,
            Line2Point1.y + (Line2Point2.y - Line2Point1.y) * P2);
    }

    public void drawLines()
    {
        lineRend.positionCount = 0;
        Vector3[] positions = new Vector3[Mathf.RoundToInt(timeToHit) + 1];
        for (int i = 0; i < timeToHit + 1; i++)
        {
            positions[i] = IntersectionPoints[i];  // Draws the path
        }
        lineRend.positionCount = positions.Length;
        lineRend.SetPositions(positions);
        drawMe = false;
    }

    public void MoveMe()
    {
        if (IntersectionPoints.Count == 0)
        {
            return;     // Nothing to move along yet
        }
        if (transform.position != IntersectionPoints[targetreached])
        {
            float step = speed * Time.deltaTime;
            transform.position = Vector3.MoveTowards(transform.position, IntersectionPoints[targetreached], step);
        }
        else
        {
            if (targetreached < IntersectionPoints.Count - 1)   // Stay inside the list bounds
            {
                targetreached++;
            }
        }
        if (transform.position == Target.transform.position)
        {
            moveMe = false;
        }
    }

    public void ResetMe()
    {
        transform.position = source;
        targetreached = 0;
        X_line_length = 0;
        Y_line_length = 0;
        Y_Negline_length = 0;
        X_line_bit_x = 0;
        X_line_bit_y = 0;
        Negline_bit_x = 0;
        Negline_bit_y = 0;
        X_line_bit_xs.Initialize();
        X_line_bit_ys.Initialize();
        Negline_bit_ys.Initialize();
        P_lines.Clear();
        Q_lines.Clear();
        IntersectionPoints.Clear();
        Px = 0;
        Py = 0;
        Qx = 0;
        Qy = 0;
        moveMe = false;
        resetMe = false;
    }

    void OnGUI()
    {
        GUI.Label(new Rect(10, 10, 140, 20), "Source: " + source);
        GUI.Label(new Rect(10, 30, 140, 20), "Target: " + target);
        GUI.Label(new Rect(10, 50, 140, 20), "Height: " + height);
    }

    void OnDrawGizmos()
    {
        Gizmos.color = Color.red;
        Gizmos.DrawWireSphere(source, 0.2f);
        Gizmos.DrawWireSphere(target, 0.2f);
        Gizmos.DrawWireSphere(height, 0.2f);
        Gizmos.color = Color.green;
        Gizmos.DrawLine(source, target);
        Gizmos.DrawLine(source, height);
        Gizmos.DrawLine(height, target);
#if UNITY_EDITOR
        UnityEditor.Handles.Label(source, "SOURCE");
        UnityEditor.Handles.Label(target, "TARGET");
        UnityEditor.Handles.Label(height, "HEIGHT");
#endif
        Gizmos.color = Color.yellow;
        // Shows the construction lines in the editor (the count guard avoids errors before the lists are built)
        for (int i = 0; i < P_lines.Count && i < Q_lines.Count; i++)
        {
            Gizmos.DrawLine(P_lines[i], Q_lines[i]);
        }
    }
}

Xander out.

The Coin Flip

Ah, the coin flip! Simple, easy, and fun. The mechanic is a platformer mainstay. You run over a spinning coin (it glitters, it calls to you), it pops into the air, and it’s yours!

The Endless Elevator Coin Flip !

This is how we do it…

There is a Rigidbody and Collider on both the coin and the player character. You can clearly see the spinning frame of the coin collider in the .gif above.

The collider acts as a trigger which is being listened for by our script which executes the “pop”.

The coin has a spinning script (and also a magnetic feature as a bonus for later).

The player has a couple of behaviours that handles the trigger and action.

The coin scripts – note the use of the slider to get the spinning speed just right.

Here we have the coin’s Rigidbody and Collider settings:

The Rigidbody isKinematic and the Collider is a Trigger

This is the script we use for spinning:

using UnityEngine;

public class spinCoin : MonoBehaviour {

    [Range(0.0F, 500.0F)]
    public float speed;

    // Update is called once per frame
    void Update () {
        transform.Rotate(Vector3.up * speed * Time.deltaTime);
    }
}

Simple and sweet.

This is the function that handles the collision and the pop into the air! (It’s part of our character behaviours.)

void OnTriggerEnter(Collider otherObj)
{
    if (otherObj.name == "Coin(Clone)")
    {
        coins++;
        var coin_txt = coins.ToString();
        coinsText.text = "Coins: " + coin_txt;
        Rigidbody riji = otherObj.GetComponentInParent<Rigidbody>();
        riji.useGravity = true;
        riji.isKinematic = false;
        riji.AddForce(Vector3.up * 40f, ForceMode.Impulse);
        Destroy(otherObj.gameObject, 0.4f);
    }
}

First of all we increment our coin total variable and update the screen display.

The mesh for the coin is part of a child component so we need to call the Rigidbody attached to the parent object.

We set useGravity to true so that the coin falls back down after the force is added, and set isKinematic to false so that physics can act on its mass as it falls.

After a very short flight we destroy it (0.4 seconds).

As an added bonus here is the other coin behaviour for when a magnet power up is used in the game.

private void OnTriggerEnter(Collider col)
{
    colName = col.gameObject.name;
    float step = speed * Time.deltaTime; // calculate distance to move
    if (colName == "chr_spy2Paintedv2")
    {
        transform.position = Vector3.MoveTowards(transform.position, col.gameObject.transform.position, step);
    }
}

This is for when we make the collider on the coin really big. If the player gets in range of the collider then the coin moves like a magnet towards him. Which is kinda fun when there are lots of coins around and you feel like a millionaire just by standing there.
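The post doesn't show how the collider gets enlarged when the magnet power-up fires, but a minimal sketch could look like this (the class, field names, and radii here are all hypothetical, assuming the coin's trigger is a SphereCollider):

```csharp
using UnityEngine;

// Hypothetical sketch: grow the coin's trigger radius while the magnet power-up is active.
public class CoinMagnet : MonoBehaviour {

    public float normalRadius = 0.5f;   // default pickup range
    public float magnetRadius = 5f;     // "millionaire mode" range
    private SphereCollider trigger;

    void Start () {
        trigger = GetComponent<SphereCollider>();
        trigger.radius = normalRadius;
    }

    // Called by whatever power-up system the game uses (assumed, not shown in the post)
    public void SetMagnet(bool on) {
        trigger.radius = on ? magnetRadius : normalRadius;
    }
}
```

Because the MoveTowards behaviour above runs in OnTriggerEnter only once per entry, a real implementation would likely move the coin in OnTriggerStay or Update instead, so it keeps homing while the player is in range.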

MagicaVoxel-Blender-Unity Workflow

Hi Trixie here….

We had a good break from the build cycle with the Text Adventure framework, and since then we have been making lots of fun headway on the main game in development, Endless Elevator.

Endless Elevator is, as the name suggests, an endless runner style of game. It’s played in the vertical axis and follows the Good Cop as he scales the heights of an endless building shooting down the bad guys, climbing stairs, and catching elevators.

We have the main game functionality finished to a point so we started working on background objects and some cute little buddies for the Good Cop. It’s puppies…ain’t they cute!

This is not about the puppies though. This is about the workflow we have been using for creating assets using MagicaVoxel and making them game engine ready using Blender before importing them into Unity.

Let’s start with MagicaVoxel: a free (no commercial license required), 8-bit, super awesome 3D voxel editor. Credits to the software are appreciated (e.g. “created by MagicaVoxel”) – like what I did there, just like that! All the assets for Endless Elevator have been made with MagicaVoxel. The walls, the floors, the furniture, and the characters.

First we model and then we paint in MagicaVoxel. For example this table lamp:

When we are done modelling and painting we export it as an .obj file, which also produces a .png of the palette mesh mapping. It’s a bit like using UV Unwrap in Blender, but much harder to manually map or see.

Once we are done with MagicaVoxel, if we want to optimise, we import the .obj file into Blender. Blender is a terrific piece of open source 3D modelling (and more) software. There is a trade-off here… we use Blender to lower the poly count on complex objects by using the Decimate modifier. This modifier basically takes a parameter of your vertices (like the angle between edges) and reduces the vertex count by simplifying the model. You see, the problem with MagicaVoxel is that it creates edges from a fixed point, which can make lots of thin triangles.

Have a look at this model of the lamp imported into Blender:

You can see all the sharp angles of the triangles there. This is how MagicaVoxel works under the hood, and it’s very efficient within that program, but it sucks a bit for making complex models that you want to import into a game engine.

This is the Decimate modifier in Blender that we use to simplify this topology. We tell the modifier to use Planar mode (faces) and simplify anything that has an angle under 25 degrees.

We are left with something like this: (below)

This is much simpler and super easy for the game engine to understand and render.

The trade-off here is that when you decimate all the vertices you lose your UV mapping for the paint work you might have done in MagicaVoxel. These are the limitations of working with awesome freeware. Sure, it’s awesome, but if you shell out a few hundred (or less in some cases) for different voxel modelling software you can get away with not having to work around these problems. But welcome to the world of no-budget game making. Hacking through the workarounds is part of the fun. Plus you actually learn a bit while you are working it out.

So in our game Endless Elevator we use a lot of small (i.e. not complex) models and import them straight from MagicaVoxel, using its paint system and the resulting exported image files to make the materials (albedo component). If we have more complex models that we want to simplify, like the walls and lifts in the surrounding building, we import them into Blender and do some optimising. Once the complex models have been optimised we unwrap them and paint the UVs using GIMP. When they are imported into Unity we either add a material with the coloured UV mask we painted up or use Unity’s built-in colour system for large areas.

There is another problem with using MagicaVoxel to make your game assets, and that is that on more complex models the “normals” of faces are often flipped the wrong way round. This one is kind of easy to spot and not that much fun to remediate. If you have a look at our character below you can see his shadow being projected onto the wall behind him.

Oops – he’s got big holes in him. You cannot see it on the model, and it’s really only a problem if you are looking for it and using lots of hard lighting. In a 3D model each polygon of the mesh forms a face, and that face has two sides. In Unity’s default shader only one side (the forward one) is rendered. So when MagicaVoxel flips a few faces here and there (they are usually very small) you get these gaps that do not block the light in a shadow. It’s pretty hard to show in an image, but what we have below is the model imported into Blender, where we can expose the normals (the direction the face is facing!) and see the issue. In this image we have clipped the camera hard so that we can see into the cavity inside the model. Normals show up as light blue lines. You can see a few of them poking the wrong way into the center of the model instead of the outside. You can play around with the “Flip Normals” feature in Blender to fix these issues, but it’s a lot of fiddling that frankly I have not had the patience or need to do yet!

So these are just a few of the issues and workarounds we use with this workflow – I hope you enjoy reading about it and if you have any questions feel free to comment 🙂

Trixie out.

Unity Hinge Joint

Hi Xander here…

For our game Endless Elevator, which is in development, we have a bad guy who is a knife thrower. I know, nasty. In keeping with the blocky style of the characters in the game, only his throwing arm moves and the rest of him is rigid. We decided to use Unity’s inbuilt Hinge Joint and “spring” feature to simulate the throwing action. It turned out to be really easy to implement but hard to control perfectly. This is the story of how it all hangs together.

This guy below with the creepy eyes and beard is our knife throwing guy. You can see the top level empty Game Object called KnifeSpy_Package and two child objects (one for his body mesh the other for his arm which is separate). You can just make out the orange arrow of the Hinge Joint near his shoulder but more of that below.

This is his arm object. He’s holding a knife now… but soon we are going to teach him how to throw it (kinda).

You can see the Hinge Joint attached to the arm object here as a red arc around the Z axis. The arc of the Hinge Joint has been limited to just the angle that he needs to raise the arm and bring it back down in a throwing action.

Here is what the Hinge Joint looks like in the Editor. That Connected Body is the main figure of the character. The arm mesh that this script is attached to also has a Rigidbody component and must be set to “Use Gravity” for the Spring to work.

You can see where we set the limits for the arm’s axis at the bottom of the component there. We access the Use Motor boolean from our script to turn that feature on and off. When it’s on, a spring winds the arm up to its firing position, and when he throws, that spring is released, shooting his arm back down in a throwing arc.
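The limits and motor can also be configured from code instead of the Inspector. This is a sketch of how that might look with Unity's HingeJoint API (the angle and motor values here are illustrative, not the game's actual numbers):

```csharp
using UnityEngine;

// Illustrative sketch of driving Hinge Joint limits and motor from a script.
public class ArmHingeSetup : MonoBehaviour {

    private HingeJoint hinge;

    void Start () {
        hinge = GetComponent<HingeJoint>();

        // Limit the arc to just the angle needed for the throwing action.
        // JointLimits is a struct, so get it, modify it, and assign it back.
        JointLimits limits = hinge.limits;
        limits.min = 0f;
        limits.max = 120f;
        hinge.limits = limits;
        hinge.useLimits = true;

        // The motor winds the arm up to its firing position...
        JointMotor motor = hinge.motor;
        motor.targetVelocity = 90f;   // degrees per second
        motor.force = 100f;
        hinge.motor = motor;
        hinge.useMotor = true;
    }

    // ...and switching it off lets the arm snap back down in a throwing arc.
    public void Release() {
        hinge.useMotor = false;
    }
}
```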

This is what our script looks like in the editor:

The Target Angle is the height of the arm as it raises to throw. When the Player is in range and we are facing him if the arm has been raised above the target angle we can throw the knife. We can use this to tweak the throw. You can see in the example below our arm doesn’t really come up high enough so we can use this setting to fix what it looks like on the fly.

The ‘X’ is exposed in the editor to help with that process so that we can understand what’s going on with that setting without having to click around in the editor to see it against the arm transform.

The Knife Prefab is the knife mesh and the Knife Transform is an empty game object used to define the spot where we instantiate the Knife Prefab.

The knife is instantiated with some force, and there are booleans to control whether we can shoot and whether the wait between shots has completed.

This is what the script looks like as code:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class KnifeHinge : MonoBehaviour {

    private HingeJoint hingeKnife;
    public Vector3 targetAngle;
    public float x;    // Debug in Editor
    public GameObject knifePrefab;
    public GameObject knifeTransform;
    public float force;
    public bool canShoot;
    public bool waitDone;
    private Transform playerTransform;
    public Transform parentTransform;

    // Use this for initialization
    void Start () {
        hingeKnife = GetComponent<HingeJoint>();
        playerTransform = GameObject.FindWithTag("Player").transform;
    }

    // Update is called once per frame
    void Update () {
        if (Mathf.Abs(playerTransform.transform.position.y - parentTransform.transform.position.y) < 1)  // If the player is on the same level as you
        {
            if (Mathf.Abs(playerTransform.transform.position.x - parentTransform.transform.position.x) < 8)  // If he is within 8 units of you
            {
                var lookPos = playerTransform.position;
                parentTransform.transform.LookAt(lookPos);
                if (waitDone)
                {
                    canShoot = true; // canShoot is only true if you are looking at the Player (on the same level and within 8 units)
                }
            }
        }
    }

    void FixedUpdate()
    {
        x = hingeKnife.transform.rotation.eulerAngles.x;   // for debugging the angle of the arm and hinge easily in the editor
        if (canShoot)
        {
            if (hingeKnife.transform.rotation.eulerAngles.x > targetAngle.x && hingeKnife.transform.rotation.eulerAngles.x < (targetAngle.x + 10f))  // set to 295; in practice it rises to about 297 (the arm all the way up)
            {
                hingeKnife.useMotor = false;
                var theKnife = (GameObject)Instantiate(knifePrefab, knifeTransform.transform.position, knifeTransform.transform.rotation);
                theKnife.GetComponent<Rigidbody>().AddForce(-force, 0, 0, ForceMode.Impulse);
                Destroy(theKnife, 2.0f);
                canShoot = false;
                waitDone = false;
                StartCoroutine(WaitAround(2f));  // shoot every 2 seconds
            }
        }
    }

    private IEnumerator WaitAround(float waitTime)
    {
        yield return new WaitForSeconds(waitTime);
        waitDone = true;
        hingeKnife.useMotor = true;
    }
}

This is what the whole thing looks like put together:

As I mentioned above there is a bit of tweaking to get it to look right – but this post is about the process of putting everything together and how the components work to achieve the effect.

This is what it looks like after the tweaking:

I hope you found this interesting enough – if you did and want to read more about this stuff I did a post a few weeks back about how we do the guns in this game Endless Elevator:
http://www.zuluonezero.net/2019/03/15/unity-how-to-do-guns/

Enjoy.

Unity How To Rotate

Hi Xander here. I kept making mistakes and wasting programming time when it comes to Transform rotations. It can be hard to remember what all the inbuilt Unity Vector commands do and when it’s easier to use Quaternion.LookRotation() or a different inbuilt command. So I created an example project that includes common ways to handle a rotation and a graphical representation of them. I’m putting it up here in the hope that other people find it useful. The code and project files are in Git here: https://github.com/zuluonezero/UnityRotation and I will give a description of what to do with it below.

All you need to do to get it working is open a fresh project and remove any existing Cameras and other items from the default Scene.

Then import the RotationsPrefab from GitHub and place it in the scene.

You will see that there is a cube set up in the middle with the Axis bars shown in green, red, and blue.

Have a look at the Cube object in the Editor and check out the script attached to it called RotationScript.

This is what it looks like:

This is the bit that deals with Vectors.
This is the bit that deals with Quaternions.

Have a read through the code (most of which is not my own and comes from the Unity3d Manual examples for the functions called).

The script runs in the Editor but it’s better in Run mode (don’t maximize it or you cannot play with the script while it’s running).

You can use the Move Camera button to shift from 2D to a 3D view (3D is better).

Click Play and start messing round with the tick boxes and sliders of the Rotation script in the Editor. (Most of the tick boxes need to be unchecked after you have used them to move down the execute order of the script).

The first section plays with Vectors to rotate the cube and the second uses Quaternions.

There is a summary of what’s happening in the script on the screen but having the code open while you play helps understand what’s going on a little easier.
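To give a flavour of the two styles side by side, here is a small sketch of my own (not taken from the demo project) contrasting a Vector-based spin with a Quaternion-based turn-to-face:

```csharp
using UnityEngine;

// Illustrative only: two common ways to rotate an object.
public class RotateExamples : MonoBehaviour {

    public Transform target;                // something to face (assumed assigned in the Inspector)
    public float degreesPerSecond = 90f;
    public bool useQuaternionStyle;

    void Update () {
        if (!useQuaternionStyle)
        {
            // Vector style: spin around the local Y axis at a fixed rate
            transform.Rotate(Vector3.up * degreesPerSecond * Time.deltaTime);
        }
        else if (target != null)
        {
            // Quaternion style: smoothly rotate to face the target instead
            Quaternion look = Quaternion.LookRotation(target.position - transform.position);
            transform.rotation = Quaternion.RotateTowards(transform.rotation, look, degreesPerSecond * Time.deltaTime);
        }
    }
}
```

The demo project covers many more variations, but this is the basic split: Rotate works in angle increments, while the Quaternion functions work with whole orientations.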

Here is a little demo of what it looks like when you play around with it.

I hope you get something out of this because even after a couple of years I still get this stuff wrong time and again and have to relearn it again. Hopefully now it will stick!

Xander out.

Unity How To Do Guns

Here at ZuluOneZero we love guns in games and other fantasy settings for their ability to embody drama, tension and action. They are a super power that takes us beyond the normal abilities to exert force. Sadly in real life guns suck and if you like shooting guns for real stay the heck away from me and my friends.

Anywhoo… this is how we use Guns in Endless Elevator to dispatch the Bad Guys and generally wreak havoc in an otherwise quietly innocent building.

This is our hero, The LawMan! See, he comes with that super-powered lawmaster pistol, standard issue bullet proof vest, sheriff’s hat, and a make-no-mistakes-we-mean-business moustache (can you tell he’s smiling under that?).

The LawMan

This is how he looks in the Unity Scene Editor. See those two objects he has as children, “Gun” and “smoke”? They sit invisibly just where the 3D cursor is in the image below… just at the end of the gun barrel. The Gun object is used as the spawn point for bullets, and smoke is one of two particle systems that go off when the gun fires (the other particle system does sparks and is attached as a component directly to the Game Object).

There are four scripts that handle the basic Gun actions in our game. There are of course plenty of ways to do this – but this is the way we do it for this game. One script handles the aiming of the gun as part of the Character Controller. Another script attached to the Character handles the firing of the Gun and the spawning of the bullets. The Bullets have their own script that handles Gravity, Acceleration and Animations while alive and during Collisions. The last script attached to the Bad Guy handles the impact effects of the Bullets and the “blood”.

We wanted to keep the cartoon elements of gun violence in this game and get away from realism as much as possible. That said, a bullet strike has a pretty huge impact, and when we had a red-coloured particle system for the blood it looked really gruesome. We changed the impact force to be over the top and super exaggerated so that it’s a funnier reaction, and moved the particle system colour to yellow (we might change it to stars later on). The bullets are supposed to look like expanding rubber dum-dum bullets, so they grow out of the gun and enlarge a little bit in flight. After a collision they start to shrink again and get very bouncy.

Here is a sample of game play where our hero blasts away at some chump.

So the Hero Character has a couple of scripts that handle aiming and firing.

The snippet below is the aiming component of the Character Controller. The Player has a wide trigger collider out the front that picks up when it hits a Bad Guy. If there are no _inputs from the controller (i.e. the Player stops moving) then he will automagically rotate towards the Bad Guy and thus aim the gun at him.

void OnTriggerStay(Collider otherObj)
{
    if (otherObj.name == "bad_spy_pnt" || otherObj.name == "Knife_spy")
    {
        if (_inputs == Vector3.zero)
        {
            Vector3 lookatposi = new Vector3(otherObj.transform.position.x, transform.position.y, otherObj.transform.position.z);
            transform.LookAt(lookatposi);
        }
    }
}

Now once we are aiming we can fire. (You can shoot at any time – and strafing is pretty fun – but it’s far easier to hit if you stop and let the auto-aim work for you.) For firing, this is what the FireBullet script exposes in the Editor:

There is a “bullet” prefab, which we will talk about below, the Gun transform (i.e. the bullet spawn point), the force the bullet is spawned with, the audio for the shot, and the particle system for the smoke (called PartyOn… I know).

The script itself is pretty straightforward: The bullet is spawned at the Gun Transform with a direction and force, the gun noise goes off, and the particle system does smoke and sparks. After two seconds the bullet is destroyed. This is what it looks like:

using UnityEngine;

public class FireBullet : MonoBehaviour
{
    public GameObject bulletPrefab;
    public Transform bulletSpawn;
    public float force;
    public AudioClip fireNoise;
    private AudioSource MyAudio;
    public ParticleSystem partyOn;
    public bool includeChildren = true;

    void Start ()
    {
        MyAudio = GetComponent<AudioSource>();
        partyOn = GetComponent<ParticleSystem>();
    }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
        {
            Fire();
        }
    }

    public void Fire()
    {
        partyOn.Play(includeChildren);
        var bullet = (GameObject)Instantiate(bulletPrefab, bulletSpawn.transform.position, bulletSpawn.transform.rotation);
        MyAudio.Play();
        bullet.GetComponent<Rigidbody>().AddForce(transform.forward * force);
        Destroy(bullet, 2.0f);
    }
}

One of the really important lessons we learned from this script is to get the Transforms and X/Y/Z directions of your models imported into Unity in the right orientation first up. We had a few different models for bullets over the last few weeks, ranging from simple cylinders, to pillows and bean bags, and real bullet shapes. It makes it so much easier to direct objects if their rotations are correct to start with. For example, we did one quick model of a cylinder but had it sitting on the Z axis instead of X, so when we applied the “forward” force the bullet would travel sideways.
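When re-exporting the model isn't convenient, one cheap workaround is to parent the mesh under an empty GameObject and rotate only the child, so the parent's transform.forward points where the model appears to face. A rough sketch of the idea (my own illustration, not the game's code; the exact angle depends on how the model was authored):

```csharp
using UnityEngine;

// Illustrative: compensate for a mesh authored with the wrong forward axis
// by rotating the child mesh once, leaving the parent's axes clean.
public class AxisFixExample : MonoBehaviour {

    void Start () {
        // Assume the imported mesh is the first child and points down +X instead of +Z.
        Transform mesh = transform.GetChild(0);
        mesh.localRotation = Quaternion.Euler(0f, -90f, 0f);  // swing +X around to +Z (sign depends on the model)
        // Now AddForce(transform.forward * force) on the parent pushes the
        // visible model in the direction it appears to face.
    }
}
```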

This is how our bullet looks now:

This is the script that handles its behaviours, along with the settings of its Rigidbody and Collider:

It’s got a rubber material on the Collider so that it bounces around when gravity is enabled on the Rigidbody after a collision. We disabled gravity in flight so that we could slow down the bullet’s firing path and not have to use so much force. Having a slow bullet adds to the cartoon drama, reinforces the rubber-bullet idea, and looks less like killing force. Here is the script:

using UnityEngine;

public class BulletGravity : MonoBehaviour {

    private Rigidbody rb;
    public float force;
    public float accelleration;
    public bool collided;
    public float scaleFactorUp;
    public float scaleFactorDown;
    public float maxScale;
    public float bounceForce;

    void Start () {
        rb = GetComponent<Rigidbody>();
    }

    void Update()
    {
        // Grow the bullet in flight until it reaches maxScale
        if (transform.localScale.x < maxScale && !collided)
        {
            transform.localScale += new Vector3(scaleFactorUp, scaleFactorUp, scaleFactorUp);
        }
        // After a collision, shrink it back down again
        if (collided)
        {
            if (transform.localScale.x > scaleFactorDown)
            {
                transform.localScale -= new Vector3(scaleFactorDown, scaleFactorDown, scaleFactorDown);
            }
        }
    }

    void FixedUpdate ()
    {
        if (!collided)
        {
            rb.AddForce(transform.forward * force * accelleration);
        }
    }

    void OnCollisionEnter(Collision col) {
        collided = true;
        rb.useGravity = true;
        rb.velocity = transform.up * bounceForce;
    }
}

Below is the Collision part of the script that handles the BadGuy flying up into the air and bleeding all over the place.

void OnCollisionEnter(Collision col)
{
    colName = col.gameObject.name;
    if (colName == "bullet(Clone)")
    {
        hitpoint = col.transform.position;
        GameObject BloodObject = Instantiate(BloodPrefab, hitpoint, Quaternion.identity, TargetObject.transform) as GameObject;
        Destroy(BloodObject, 5.0f);
        // We disable his AI script so he doesn't try to walk around after being shot :) It's not a zombie game.
        var AI_script = GetComponent<BadGuyAI>();   // the AI type name was stripped by the blog formatting; substitute your own behaviour class
        if (AI_script)
        {
            AI_script.enabled = false;
        }
        Vector3 explosionPos = transform.position;
        explosionPos += new Vector3(1, 0, 0);
        rb.AddExplosionForce(power, explosionPos, radius, offset, ForceMode.Impulse);
        yesDead = true;
        Destroy(transform.parent.gameObject, 2f);
    }
}

Let’s break down the whole process visually. Here is our Hero in a standoff with a Baddy. He hasn’t aimed yet but you can see his big aiming collider in yellow underneath him extending out front and of course the character box colliders are visible too in green. (The Bad Guy has a yellow sphere collider on his head cause we make him get squashed by lifts!).

Here we are just after firing. His aiming script has put him on target and his bullet is there travelling with force and bloating in size.

And this is the impact with the force applied to the Bad Guy and his yellow blood impact stuff spewing out dramatically for effect.

All we’ve got to do now is clean up the Bad Guy’s object and destroy the used bullets. We don’t do any object pooling for the bullets yet… the overhead of the current system hasn’t warranted it, but maybe later on.
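If pooling does become worth it later, the usual pattern is to pre-instantiate the bullets and recycle them instead of calling Instantiate/Destroy every shot. A minimal sketch of that idea (hypothetical, not code from the game):

```csharp
using System.Collections.Generic;
using UnityEngine;

// A minimal bullet pool: reuse deactivated bullets instead of Instantiate/Destroy.
public class BulletPool : MonoBehaviour {

    public GameObject bulletPrefab;
    public int poolSize = 20;
    private readonly Queue<GameObject> pool = new Queue<GameObject>();

    void Start () {
        for (int i = 0; i < poolSize; i++) {
            GameObject b = Instantiate(bulletPrefab);
            b.SetActive(false);
            pool.Enqueue(b);
        }
    }

    public GameObject Spawn(Vector3 position, Quaternion rotation) {
        GameObject b = pool.Dequeue();    // the oldest bullet gets recycled
        b.transform.SetPositionAndRotation(position, rotation);
        b.SetActive(true);
        pool.Enqueue(b);                  // back of the queue until needed again
        return b;
    }

    public void Despawn(GameObject b) {
        b.SetActive(false);               // instead of Destroy(bullet, 2.0f)
    }
}
```

The Fire() method would then call Spawn() where it currently calls Instantiate, and a short coroutine or timer would call Despawn() in place of the timed Destroy.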

That’s about it. Hope you’ve enjoyed this gun for fun expo-say.

Trixie out.

Endless Elevator – Show Reel

DevLog March 2019: This month we did a cheesy show reel of the latest changes in development for Endless Elevator.

This is a very early “In Editor” development sketch up of some of the features currently being included… Enjoy the show.

The game will be available on the mobile app stores in 2019.
For more information subscribe to our Dev Blog at: http://www.zuluonezero.net/zuluonezero-devblog/