2014/07/18

Shifting Priorities

A.V. is currently running a Kickstarter campaign.  So I’ll mention that A.V. is currently running a Kickstarter campaign.  Please stop by and at least give us a look.
 
tiny.cc/av_kickstarter
 
Thank you.  That is all.  Normal service will now resume.
 
The Way of Things
 
I’m very pleased to have grown up in an age when I could earn a college degree in game development.  Two of them, in fact.  The fact that game development exists as an academic curriculum is immensely satisfying.  If the traditionalists had had their way, I would have become a scientist.  Instead, I’m in the business of entertainment.  Take that, progress!
 
The trouble with an industry like game development being an academic curriculum, though, is that academic curricula tend to be focused on establishing conventions and standards.  After all, if you want to teach people how a system works, you need to have a clean example of what that system is.  Game development isn’t quite so clear-cut.
 
My education, for instance, made me aware of the numerous business models and methodologies that game development can follow.  Waterfall follows a different style and scheduling system than Scrum does.  But in the midst of this, there are supposed to be commonalities as to the definition of what constitutes a game’s Alpha version, or a Beta, or a Gold Master.
 
Prototype: Display the game’s core mechanic.
 
Alpha: A version featuring the preliminary layout of the full game structure from beginning to end.  Levels are boxed or filled with placeholder art.  Core mechanics are in place, along with any features necessary for completion of the game.
 
Beta: An asset-complete version of the full game.  All final game artwork and sound is in place.  All design mechanics are laid out in a functional form.  The game world is complete.
 
Gold Master: All mechanics are fully functional, and inefficient or incomplete code has been cleaned up.  The game is in its fully-completed, sales-ready form.
 
Post-Release: Additional material, such as DLC or patches, is released to further enhance the original game experience.  Also, you’re rolling around in gigantic piles of money.
 
Using the System
 
To that end, I’ve always tried to look at a schedule, no matter how open, in terms of how each of these states can be reached.  “Okay…we need to have all of our art assets in the game by the Beta…because that’s what a Beta is.  When do all of the art assets need to be completed?  When do they need to be in the game?  When do we need to work out the exact pipeline for putting them into the game?  When do we need to realize that everything in the art pipeline is going horribly wrong?”
 
If you always make the same type of game, this process can work pretty well.  However, from what I’ve been able to work out, most developers don’t operate under the hope that they’ll spend their whole lives making the same game over and over again so they’ll have an easy time with scheduling.  Rather, people like to work on something new and exciting.  The problem with a new and exciting project is that it’s…well…new.  It carries a different set of requirements than the project before it, and a different ordering of priorities needs to be considered in order to make the game as complete as possible.  This is especially tricky in the sort of clever, genre-bending indie games I’ve heard the kids talk about so much these days.
 
What, for example, goes into a prototype?  It’s supposed to be a piece of the game you want to make – a representative of the larger product.  A stranger should look at the prototype and say, “Ah, yes…I appreciate the entertainment value of this product.  I would be inclined to invest monetary units into the development and/or playing of the larger product of which this work is representative.”  Or something along those lines.  Generally, you should hope for something a bit more enthusiastic.
 
Putting It into Practice
 
Okay.  Good.  So how do you actually make people say that?  Do you fill it with pre-existing placeholder art and focus on programming a mock-up of your core mechanic?  Do you design a really clever world, build it out of primitives, and use pre-existing scripts that only allow you to move around?  Do you make some customized art assets and particle effects to sell the environment, then go bare-bones with everything else?  Keep in mind, you don’t have long to do it.
 
It’s a question of what really “makes” your game.  If your game is largely about indulging in the epicness or quirkiness of the game world, the game’s art will be extremely important in selling that concept.  If your game is built around a new mechanic that no one has seen before, a correctly-designed test level and precisely-programmed actions and responses will take precedence in demonstrating the fun factor.  If you’re working on a follow-up to last year’s big online FPS war game, you’ll want to show that the system no longer boots you off the server if your name starts with “J”.
 
In developing A.V., we found ourselves running across a similar problem.  According to what I’d been taught in years past, the development process was supposed to look something like this:
 
Ideas -> Concept/Pitch -> Documentation -> Box Out Levels -> Test Core Mechanic -> Add More Mechanics -> Add Art Assets -> Clean Up -> Profit
 
This is the basic structure we initially tried to work from.  Very simple.  Very functional.  Easy to follow.  Okay, so we threw our ideas at the wall.  We came up with a concept and started to document it.  And around the same time, we started the prototype.  That’s when it became clear that things were going to have to be a little bit different.  Our structure has been a bit more like this:
 
Treachery and Lies -> Ideas -> Concept -> Sketch Levels -> Pitch -> Concept -> Pitch -> Documentation -> Sketch Levels -> Make Scripts Do -> Make Art Assets -> Make Level -> Make Scripts Do -> Make Art Assets -> Make Tutorial -> Make Tutorial -> Make Tutorial -> Success! -> Make Level -> Fix Scripts -> Make Art Assets -> Make Level -> Stop Everything to Make Kickstarter -> … -> Profit??
 
Breaking the Process
 
The problem lies in the nature of the concept itself.  A.V. is a game concept built around an overall experience – being part of a musically-based computer world in which you can only interpret your world through the use of sound.  The mood and feel of the game are its key selling points.  Getting the correct mood across is dependent on capturing the right art style, the right sound effects, and on generating a successful musical structure in the game’s actions. 
 
At this stage, design and programming resources would normally be coming together to determine the best statistics to be associated with player actions.  How far and high do we want the player to be able to jump?  What does this upgrade need to do, and how can we implement it?  In our case, those elements had to become side thoughts and new priorities came in to take their place.
 
Since the core of the gameplay was focused around the idea that the player could only “see” sound, it was important to make sure the player realized the correlation between sound and the amount of light generated in the game.  First priority, therefore, went towards finding a good way to synchronize sound and light.  After all, if the player is supposed to be seeing sound, the game’s visual elements need to be based around the game’s audible components.  Otherwise, you just have a bunch of lights that simply happen to be near objects that make noise.  Synchronizing light intensity to the game’s audio doesn’t necessarily guarantee that the player makes that connection, but it goes a long way.
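To make that concrete, here’s a minimal sketch of the kind of sound-to-light link I’m describing, written for Unity (the engine A.V. runs on).  This is an illustration rather than our actual script; the component name and tuning values are made up, and all it does is map the loudness of an AudioSource onto the intensity of a Light.

```csharp
using UnityEngine;

// Illustrative sketch, not A.V.'s actual code: drives a Light's intensity
// from the loudness of the AudioSource on the same object.
[RequireComponent(typeof(AudioSource))]
[RequireComponent(typeof(Light))]
public class SoundLight : MonoBehaviour
{
    public float maxIntensity = 8f; // intensity at full loudness (hypothetical tuning)
    public float smoothing = 10f;   // higher = snappier response

    private AudioSource source;
    private Light lamp;
    private float[] samples = new float[256];

    void Start()
    {
        source = GetComponent<AudioSource>();
        lamp = GetComponent<Light>();
    }

    void Update()
    {
        // Read the most recently played output samples and compute RMS loudness.
        source.GetOutputData(samples, 0);
        float sum = 0f;
        for (int i = 0; i < samples.Length; i++)
            sum += samples[i] * samples[i];
        float rms = Mathf.Sqrt(sum / samples.Length);

        // Ease the light toward the current loudness so it pulses smoothly.
        float target = Mathf.Clamp01(rms) * maxIntensity;
        lamp.intensity = Mathf.Lerp(lamp.intensity, target, smoothing * Time.deltaTime);
    }
}
```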
 
Another priority arises from the game’s focus not just on sound, but specifically, music.  Every action within the game that produces sound is supposed to tie together to form a component in a musical track.  This means that, a.) sound effects need to have a musical quality, b.) game sound effects need to synchronize with a master tempo, and c.) on that note, game actions also need to synchronize with a master tempo.  Incidentally, there’s also point d.) do we know anyone who knows anything about composing music?
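For points b.) and c.), the standard Unity trick is to schedule everything against the audio DSP clock instead of the frame clock.  Here’s a rough sketch of that idea; it isn’t our production code, and the class name and numbers are hypothetical.

```csharp
using UnityEngine;

// Hypothetical sketch of a master-tempo clock built on Unity's DSP clock.
// Sounds queued through PlayOnBeat start exactly on the next beat boundary
// instead of the moment the triggering action happens.
public class TempoClock : MonoBehaviour
{
    public double bpm = 120.0;

    private double beatLength;   // seconds per beat
    private double nextBeatTime; // dspTime of the next beat boundary

    void Start()
    {
        beatLength = 60.0 / bpm;
        nextBeatTime = AudioSettings.dspTime + beatLength;
    }

    void Update()
    {
        // Advance the beat boundary as time passes.
        while (AudioSettings.dspTime >= nextBeatTime)
            nextBeatTime += beatLength;
    }

    // Schedule a sound to begin exactly on the next beat.
    public void PlayOnBeat(AudioSource source)
    {
        source.PlayScheduled(nextBeatTime);
    }
}
```

Anything that wants to make noise asks the clock for the next beat instead of playing immediately, which is what keeps sound effects and game actions locked to the same tempo.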
 
Because this issue clashes with other, more basic game design, art, and programming concerns, it couldn’t be resolved all at once.  It’s been popping up again and again throughout the development process, and only now have all of its different pieces begun to come together.  As a result, the feel of the game is finally starting to coalesce.  There’s still a way to go with the music and sound effects, but with the main “musical” systems coming into play, the game is at last achieving the feel envisioned by the core concept.
 
The trade-off for this, of course, has been the time taken away from refining the design and execution of actual game mechanics.  Some systems aren’t yet as clear as they could be, many assets still aren’t in place, and a number of scripts aren’t as clear-cut and efficient as they should be.  But for us, it’s been a question of where to focus our time.  What’s the most important element of the game experience?  Is it the act of playing the game, or the act of living within the game world?  We’ve been jumping back and forth between these two priorities a bit, but ultimately, our preference has clearly been for the latter.
 
Well, actually, right now, our focus isn’t on either of those.  We’ve got a Kickstarter to run.
 
Doing Whatever Works
 
Ideally, none of these issues should clash with each other.  If you’re dealing with a large enough team, you’ll have a squadron of engineers to deal with different coding priorities, a small cadre of game and level designers to make sure the game is laid out correctly, a battalion of artists to pump out assets, and a token force of producers out there winning the hearts and minds of the public, leaving the creative, artistic, and technical directors to maintain the overall mood and vision for the game.  But when you’re stuck with just two full-time developers, it quickly becomes clear just how much prioritizing has to be done.  That’s where you start to realize the importance of those questions about “what makes your game your game.”
 
Even in the larger team settings, these are key questions to be asking right from the beginning.  Knowing the key takeaway of the game helps you to know your priorities, and knowing your priorities helps you to better organize the whole development process, which makes it more and more likely that those priorities will actually be met.  I hate to sound boring about it.
 
The real point, though, is that unless you’re working on a fastidiously-maintained franchise or you’re making the same sort of game over and over again, you can’t really operate under the assumption that the development cycle should be structured the same way for each project.  Whatever milestones you put in place need to be based around what you want the world to see.  What I’d like to do is come up with alternate, more generalized definitions of what constitutes an Alpha, Beta, or Gold Master, but given the amount of variability in play here, it’s not a simple task.  Here’s the closest I can come:
 
Prototype: Here’s the key selling point of our game.
 
Alpha: Here’s our key selling point, but now placed within a larger context of a game world.
 
Beta: Here’s a more or less complete vision of our main selling point, along with the other key mechanics associated with it.  Here’s how the game world ties together with the core concepts.
 
Gold: Here’s the completed vision for our game world and design concepts.  Our main selling point is now so integrated with the game world that we’ve created a new reality for you to engage in.
 
How your game is built should be based on how you want your game to be presented to the world.  What you should present depends on the type of game you’re creating.  As for what type of game you’re creating, that’s up to you.
 
But this is all coming from some indie developer off in the wastelands of Upstate New York.  Really, what do I know?

2014/04/14

Your Players are Idiots (or: Press A to Jump!)

Seriously, people.  I’m trying to develop a whole game here.  I have better things to do with my time than hold your hand and guide you through the tutorial step by step.  I mean, come on.  You people are so needy.
 
I suppose it’s my fault, really.  As the lead designer/creative director, it was my decision to make.  I can see now why so many games open with sequences in which blocks of text pop up saying “Press ‘A’ to jump!”, “Look around for clues!”,  or “The Magic Wand can be used to cast spells.  Here’s what all of those spells are and what they do…”  I’ve always found that sort of thing kind of annoying in the world of game design.  In a world where we’re all so into immersing our players in the experience of the game, so often, the very first thing players see is a fourth-wall-breaking message from the game explaining how everything works.  Whatever happened to experimentation?  To discovery?  To SCIENCE?! 
 
Working from that mindset, in designing A.V., I wanted to take a more subtle, subconscious approach to explaining the basics of a new game concept to the audience.  This is the tale of how that effort derailed months of potential legitimate progress in the development cycle, and why it was so important for that “legitimate progress” to be put on hold.
 
The concept itself is simple enough: produce sound to generate light.  All you have to do is walk around to figure that part out.  I suppose we could have made that sort of game – the sort where you just walk around – they’re kind of a big deal lately among the artsy crowd.  Unfortunately, our design is based around a bit more than that.  We wanted our game to have goals to reach and challenges to overcome.  You know…gameplay.
 
In a game that’s designed to have clear goals and carefully-crafted puzzles, the worst thing you can hear as a designer or developer (apart from, “My computer appears to have just exploded”) is, “What exactly am I supposed to be doing?”  When that question is asked, the entire venture is pretty well dead in the water.  If the player doesn’t know what to do, they’ll set your dime-a-dozen game aside and move on to something that makes sense.  Well actually, first they’ll go and check the forums for two seconds, and upon failing to find an answer, post the comment “ZOMG u guys this gaem sux do not buy” and then move on.  However, if you are not the sort of developer who has built a forum, or for that matter, a fan base, you get something far worse: nothing.  No one discusses your game, no curiosity is generated, and an entire branch falls off of the tree that is word-of-mouth.  Effectively, an entire division of your free marketing department resigns before even beginning work.
 
So this is what we’ve been working on for the past several months.  I’m not saying we haven’t done anything else, but we’ve been putting a considerable focus on these first few minutes of our game.  We did not choose this path.  Really, we didn’t.  It’s taken up a huge chunk of development time.
 
Right from the start, A.V. was designed to incorporate a spoken narrative element to provide player instruction and generate an appropriate stylistic context for the feel of the game.  Despite this pre-designed exposition, I wanted to ensure that the basic design of the game could convey enough information for the player to be able to understand the core of the gameplay.  The question is, why bother with this intermediate step?  If verbal monologues were intended, right from the beginning, to explain the game, why bother designing the game to work without them?
 
The answer: players are idiots.
 
As a game designer, you can never expect players to do what you want.  You can never expect that players will open “Door #2” on their own, triggering the scary skeleton man to spring out from the shadows.  You can’t expect that players will know not to jump, run, and crouch against the wall at the same time, triggering the bug that allows them to suddenly jump onto the roof.  Even if you can expect them to know that that action triggers the bug, you can’t expect them to avoid exploiting it.  In the case of narrative, you can’t expect that your players will pay attention to what they’re being told, or that they’ll understand it.  Because of this, you always need a contingency plan…something that slips into the subconscious, built into the fiber of the game, that the player can’t avoid.
 
With video games, though, it’s not just a simple matter of explaining what something on the screen is and what it does.  Since players have control over the world, they need to understand how what appears on the screen relates to them in the real world.  The aesthetics of the world and the nature of the gameplay all tie together with user interface design to provide a description of how the world works inside and outside the magic rectangle with all the pretty moving pictures.  If you don’t want to trouble yourself with immersion, this doesn’t have to be all that difficult.  All you need to do is tell people, directly, that Button A performs Action X and Button B performs Action Y.  If, on the other hand, you want to convey instructions about the real world from inside the context of the game world, things become more complicated.  
 
 
 
This is, arguably, the most important piece of A.V.’s GUI.  It’s a computer mouse.  Once you understand that, you can make sense of how the controls work.  However, there are still two problems faced here:
 
1. If people don’t know this is a computer mouse, it does nothing to help them understand the controls.  Now, you would think that since A.V. is a game for the PC and it requires a mouse to play, people would easily recognize that this icon is a mouse, much like the one sitting under their hand.  Unfortunately, players are idiots, so this can’t be guaranteed.
 
2. “I understand that this is a mouse, and that each symbol on here represents something I can do with the mouse.  But what now?  What do all these symbols mean?”
 
Most of the direction this icon provides is meant to be implicit.  Elements light up or fade out as actions occur, particle effects indicate that something is ready to use, and colors change according to what you’re able to do.  The idea is for you to play along and, out of the corner of your eye, see something change and slowly register the connection between game actions and the images on this icon.  Eventually, you won’t even notice that it’s there most of the time.  Indeed, to me, this all makes perfect sense.  But you are not me.  You are a new player.  You have never seen this game before.  You, my friend, are an idiot.
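For the technically curious, here’s a toy version of what a single element on that icon does, assuming Unity’s immediate-mode GUI (what was available to us at the time).  The names and the recharge behavior are illustrative, not pulled from the real A.V. interface.

```csharp
using UnityEngine;

// Illustrative sketch of one HUD element: it fades while its action
// recharges and lights back up when the action is ready to use.
public class InstrumentIconElement : MonoBehaviour
{
    public Texture2D elementImage;                     // hypothetical icon art
    public Rect screenArea = new Rect(20, 20, 48, 48); // where it sits on screen
    public float rechargeSeconds = 3f;

    private float readiness = 1f; // 0 = just used, 1 = ready

    void Update()
    {
        // Recharge over time; using the instrument (left click) resets it.
        readiness = Mathf.MoveTowards(readiness, 1f, Time.deltaTime / rechargeSeconds);
        if (Input.GetMouseButtonDown(0) && readiness >= 1f)
            readiness = 0f;
    }

    void OnGUI()
    {
        // Fade the element out while recharging; light it up when ready.
        GUI.color = Color.Lerp(new Color(1f, 1f, 1f, 0.25f), Color.white, readiness);
        GUI.DrawTexture(screenArea, elementImage);
    }
}
```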
 
This icon details the status of your Instruments.
 
“But wait,” you might ask, “what are Instruments?”
 
Stop being such an idiot.  They’re the tools you use in the game to influence the world.  Everybody knows that.  And when I say “everybody”, I mean we, the developers.  Both of us.
 
So how do we explain what Instruments are and how you can use them?  Ooh!  I know!  We’ll have faith in the reasoning skills of our players!  We could just give them all of the Instruments right from the start and let them figure everything out.
 
Ha ha!  That’s funny, isn’t it?  We saw how funny that idea was during our first playtest.
 
Well, let’s give them the benefit of the doubt.  We’ll let them run the first playtest without any prompts from us.  Oh, look, everyone’s just sort of running around in circles.  Let’s see what they have to say about their experience!
 
“I don’t know how anything works.”
 
“I don’t know where I am.”
 
“What exactly am I supposed to be doing?”  
 
Okay, so giving people all of their Instruments from the beginning doesn’t do anything.  All they’re doing is shooting pretty lights around.  Maybe we can try giving people their Instruments one at a time!  We’ll give them an Instrument just before it’s time for them to use it.  That way, they’ll be prompted that they have something new, and they’ll be tempted to test it out in the right environment.
 
Ah.  But there’s a problem.  We have this mouse icon on the screen right from the beginning of the game.  It monitors your Instruments, and since you don’t have any Instruments as you start, the icon doesn’t do anything.  People press the right and left mouse buttons, and nothing happens, so they assume the mouse buttons are useless.  Then, when they finally get an Instrument, they don’t know how to use it.  
 
“I mean, I already tried the mouse buttons, and they didn’t do anything, so that icon must mean something else.”
 
So, okay.  Let’s try this again.  We’ll have the mouse icon off the screen.  It will only appear when you collect your first Instrument.
 
Now we’re on to something.  The icon pops up and moves into position.  Now, people assume it has something to do with that thing they just picked up.  That’s close enough.
 
But that’s not the only problem.  People are having trouble navigating in the world.  The world is very dark, and that’s by design.  But people are standing against walls, staring at the floor, and getting stuck behind boxes.  People say they need more light.
 
“I cannot teach him.  The boy has no patience.”
 
Well, forgetting the fact that they’re completely missing the point of the game, we can also take note of the fact that people aren’t making use of the Ping.  I mean, for God’s sake, it’s not like it’s hard.  Just press “E”, you idiots!
 
Then again, come to think of it, we don’t have any mention anywhere in the game that the Ping can be used to light up the world, or that the Ping can be used by pressing “E”, or, for that matter, that something called the “Ping” even exists.
 
Alright, you know what?  I give up.  We’ll put a giant letter “E” in the corner of the screen.  When you use your Ping, the E will fade out and get out of your way.  That should do it.
 
 
But, as it turns out, in addition to being idiots, players are apparently also completely blind.
 
How can you not see it?  There’s a giant, glowing letter E up in the corner of the screen.  What do you think it could possibly mean?  Okay, then, how about this…maybe people don’t notice it because it’s stationary.  We’ll make that E symbol pulse blatantly.  That should…
 
…No, you know what?  I’m not falling for it this time.  Let’s just make that E symbol into an image of a computer key.  We’ll animate it, for good measure, so people can see an “E” key being pressed.  And never mind the corner of the screen.  We’ll make it huge and put it right in the middle of the screen so there’s no possible way they can miss it.  If they can’t figure out what to do with that information, I don’t think we can help them.
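In code terms, the final prompt boils down to something like this sketch: draw a big texture in the middle of the screen, and fade it out once the player finally presses the key.  This is a simplified stand-in for the real thing, with hypothetical names.

```csharp
using UnityEngine;

// Simplified sketch of the fading key prompt, using Unity's immediate-mode
// GUI. The prompt fills the middle of the screen until the player presses E,
// then fades out over fadeSeconds.
public class KeyPrompt : MonoBehaviour
{
    public Texture2D promptImage; // the animated "E key" frames could swap in here
    public float fadeSeconds = 1f;

    private float alpha = 1f;
    private bool used = false;

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.E))
            used = true;
        if (used)
            alpha = Mathf.MoveTowards(alpha, 0f, Time.deltaTime / fadeSeconds);
    }

    void OnGUI()
    {
        if (alpha <= 0f) return;
        GUI.color = new Color(1f, 1f, 1f, alpha);
        float size = Screen.height * 0.3f; // big and unmissable
        Rect area = new Rect((Screen.width - size) / 2f, (Screen.height - size) / 2f, size, size);
        GUI.DrawTexture(area, promptImage);
    }
}
```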
 
 
Our players have used the Ping reliably ever since.  Finally, they’ve shown some signs of hope.
 
But people are still taking far too long to exit the first room.  They don’t get any Instruments until they do, so they’re effectively just running around trying to access areas they can’t reach, firing Pings into the wall since, really, that’s all they can do.  We need some lure to demonstrate that there’s more to the game than wandering around looking at this one room, although, as I said before, that sort of thing is popular in indie circles these days.
 
So our players are still idiots since they, apparently, can’t locate and walk through an open doorway that offers no obstacles.  Maybe that’s the problem.  It’s too indistinct.  There are some other elements in the world moving around and generating sound and light.  The doorway out of the room is just…a hole.  But at least that’s progress.  At least we’re changing the design of the world, now, and not just the base interface structure of the game.
 
What to do?  Well, people like collecting things.  Let’s make the player’s Instruments into physical objects.  They can generate sound, light, and seizures, just like the rest of the game.  Never mind the fact that the game is meant to be nonlinear.  Our players need to learn this stuff, so we’ll guide them from place to place until they figure out how it all works.  People want to go after the lights, so let’s give them a nice, low-hanging fruit.
 
 
“Yes.  I am good.  I am a friend.  Look at how I dance.  Come to me, my child.”
 
And, you know what?  We’ll also put in these conduits.  Energy flows through them, generating light that literally points you from one place to the next.  Follow it.  JUST FOLLOW IT.
 
And thus is the adventure, so far, of creating a tutorial for a game people aren’t used to.  It’s a quest that has caused us to deviate from our primary goals for months, all because we don’t want to directly tell our players, “Press A to Jump!”
 
We’ve since had to relent on even more of the implicit instruction system.  We’ve added a splash screen briefing players on the default controls before the game starts.  We’ve added an interactive help menu to explain every element in the GUI.  And even after all of this, when we finally started adding in the character monologues, people still expressed that they were a big help in understanding game elements.  The real irony is that our opening cutscene prominently features the line, “As they say…show, don’t tell.”  We’ve been “showing” for months.  It’s only now, once we’ve started telling, that things have started to make sense.
 
As disheartening as it is from a design standpoint to discover that our game really can’t be explained without explicit instruction, the important thing is that, at long last, it looks as though it’s starting to make some sense to people.  This means we can finally move on and worry about cleaning up the rest of the world design.
 
So, hopefully, you’ve begun to see my point.  We, the developers, have been looking at and thinking about this game nearly every day for the past eight months.  We’ve played through our own tutorial dozens of times.  We’ve built all of our Instruments, and have dictated exactly how they work.  You haven’t done any of that.  That makes you all idiots.
 
And yet, we place more faith in your judgment than we do in our own…precisely because you’re the idiots.  Because we’re not trying to make a game for us to play.  We’re trying to make a game for you.  Thank you for reminding us of that before it was too late.  Our players may be idiots, but to be honest, we’ve been the stupid ones in this relationship.

2014/04/03

Computers with Brains?!

Hello and welcome to the AV development blog.  I'd like to start off by thanking you for taking the time to read through this (assuming that you didn't just wind up here by mistake).  My name is Brockton Roth and I'm the AI developer for AV.  I started working on the project in October of 2013.  Today, I will attempt to cover some of the aspects of the AI development I have been doing.

A large part of my work has been focused on the enemies, and the AI that surrounds them.  While game AI is not necessarily limited to just enemy behavior, in AV that is its primary use.  Enemies are broken down into several groups.  Detectors are in charge of raising an alarm when the player is spotted.  Decompilers are in charge of attacking the player.  Reinforcers are in charge of guarding areas and preventing the player from getting near.  Trackers operate on their own and attempt to hunt down the player.  While there are many enemies to come, the currently existing enemies in the game are:

Detectors

CAMERON - A simple camera unit mounted on a tripod.  CAMERON can only look in one direction, but may rotate around the Y (vertical) axis to make up for this limitation.  When CAMERON sees a player, it raises an alarm that alerts nearby enemies of the player's location.  Usually, only Decompiler enemy types will respond to a raised alarm, as they are in charge of attacking the player in most cases.

Decompilers

JENNY - A fairly common drone unit with a pair of blasters used to fire projectiles at the player.  As a generic attacking unit, JENNY is placed all around the map to guard positions and patrol paths.  On top of this, JENNY can detect a player as CAMERON might, and alert nearby enemies of the player's position.  Upon reaching a location where an alarm was raised, if JENNY cannot find the player, she will attempt to look around for the player (not currently implemented).  If the player cannot be found, JENNY returns to what she was previously doing.

VLAD - VLAD has most of the behavior of the JENNY unit; however, VLAD attacks with a mounted spear.  This ram attack is much more powerful than JENNY's blaster attack, but often easier to dodge.  That being said, VLAD will not usually attack in the same manner as a JENNY; he focuses on surrounding a player and moving in from angles that aren't necessarily expected, while JENNY will group up and attack the player right away.  This can make VLAD a tougher opponent to go up against.

Reinforcers

RYAN - A large riot shield attached to a ground-based drone.  RYAN's purpose is to knock players far back, dealing some damage and often pushing them off platforms or smashing them into a wall (an often fatal encounter).  Generally speaking, RYAN sits in one position and waits for the player to cross his path.  When he does see the player, he rushes forward and attempts to ram the player, then returns to his previous position.  RYAN's riot shield cannot be harmed by Instruments, so he can only be hit on the drone itself, which makes it only truly possible to hit him from the side or behind.

TERRY - A mostly stationary turret, TERRY operates much like CAMERON and constantly sweeps the visible area to search for the player.  However, when TERRY sees a player, he not only sets off an alarm but begins firing at the player, much like a JENNY would.

----------------------------

As of right now, these are the only enemies in the game, but there are several more to come.  To learn more about this, please check out: http://avgame.wikidot.com/enemies

From a development standpoint, there were several challenges I encountered in making these enemies actually work.  Most of the work is done in an EnemyBrain script, which loops through all of the enemies and handles their updates, instead of having each enemy run its own update function.  The EnemyBrain script goes through and determines which enemies are too far away, causing those enemies to not be rendered or updated at all, so that they do not use up unwanted resources.  For enemies that are within range, it accesses an EnemyAI script located on each enemy to pull information about that enemy.  It can then use the functions within the EnemyAI script to tell that enemy how to behave, based on what the EnemyBrain determines is needed.  As an example, an Attack() function exists within EnemyAI which, when called, causes the enemy to begin (or continue, if it has already begun) the attack behavior.  There is also an enemyState that tracks what the enemy is currently doing, and an enemyType that identifies which type of enemy we're dealing with (VLAD, TERRY, JENNY, etc.).
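To give a clearer picture of that split, here is a stripped-down sketch.  The enemyState and enemyType fields come straight from the description above; everything else (the helper methods, the range value) is illustrative guesswork rather than the real A.V. source, and each class would live in its own file.

```csharp
using UnityEngine;

// Illustrative sketch of the per-enemy script. The brain calls these
// methods; the enemy itself never runs its own update loop.
public class EnemyAI : MonoBehaviour
{
    public string enemyType = "JENNY";  // VLAD, TERRY, CAMERON, ...
    public string enemyState = "Idle";  // what the enemy is currently doing

    public void Sleep() { gameObject.SetActive(false); } // no rendering, no updates
    public void Wake()  { gameObject.SetActive(true);  }

    public void Attack()
    {
        enemyState = "Attacking";
        // ... per-type attack behavior would go here ...
    }

    public void Patrol()
    {
        enemyState = "Patrolling";
        // ... path-following behavior would go here ...
    }
}

// Illustrative sketch of the central brain: one loop updates every enemy.
public class EnemyBrain : MonoBehaviour
{
    public Transform player;
    public float activeRange = 50f; // hypothetical culling distance

    private EnemyAI[] enemies;

    void Start()
    {
        // Gather every enemy once, instead of each enemy updating itself.
        enemies = FindObjectsOfType(typeof(EnemyAI)) as EnemyAI[];
    }

    void FixedUpdate()
    {
        foreach (EnemyAI enemy in enemies)
        {
            // Distant enemies are neither rendered nor updated.
            if (Vector3.Distance(enemy.transform.position, player.position) > activeRange)
            {
                enemy.Sleep();
                continue;
            }
            enemy.Wake();

            // The real brain would inspect enemyState/enemyType here and pick
            // a behavior; Patrol() stands in for that decision.
            enemy.Patrol();
        }
    }
}
```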

One big challenge I faced was getting the enemy sight to work properly.  Each enemy has an EnemySight script which controls how it sees.  This has a fieldOfViewAngle that determines how wide the enemy can see, and a playerInSight boolean which records whether or not the enemy can currently see the player.  Then there's a Vector3 personalLastSighting, which is where the enemy last saw the player; it gets compared to a global LastPlayerSighting variable, which is where the most recent sighting of the player (by any of the enemies) occurred.  Using this system, any of the enemies can raise the alarm, and the location of the player can easily be shared between them.  The last step was actually determining whether or not an enemy can see the player.  For now, this is done with a SphereCollider, though the intention is to move away from this and just do a distance check.  Once the player is within the SphereCollider, we check whether the player is within the enemy's fieldOfViewAngle.  If so, we do a Raycast from the enemy to the player to see if anything is between the two of them.  If the ray hits the player, then the player is in sight and we set playerInSight = true.  Otherwise, there is some object or wall between the enemy and the player, and the player is not in sight (playerInSight defaults to false, so we don't need to explicitly set it).
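Here's roughly what that looks like as a script.  The field names match the description above (fieldOfViewAngle, playerInSight, personalLastSighting); the tag checks and the trigger setup are a simplified approximation, not the exact A.V. code.

```csharp
using UnityEngine;

// Simplified sketch of the sight check: the SphereCollider (set as a
// trigger) defines the sight range, the view cone filters by angle, and a
// raycast confirms nothing is blocking the line to the player.
[RequireComponent(typeof(SphereCollider))]
public class EnemySight : MonoBehaviour
{
    public float fieldOfViewAngle = 110f;
    public bool playerInSight;
    public Vector3 personalLastSighting;

    void OnTriggerStay(Collider other)
    {
        if (!other.CompareTag("Player")) return;

        playerInSight = false; // assume not visible until proven otherwise

        // Is the player inside the enemy's view cone?
        Vector3 direction = other.transform.position - transform.position;
        if (Vector3.Angle(direction, transform.forward) > fieldOfViewAngle * 0.5f)
            return;

        // Is anything between the enemy and the player?
        RaycastHit hit;
        float range = GetComponent<SphereCollider>().radius; // rough sight range
        if (Physics.Raycast(transform.position, direction.normalized, out hit, range)
            && hit.collider.CompareTag("Player"))
        {
            playerInSight = true;
            personalLastSighting = other.transform.position;
        }
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player"))
            playerInSight = false;
    }
}
```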

A large portion of my recent efforts has gone into optimization, as AI can quickly become incredibly taxing on computer resources.  (This is why, I imagine, parts of the game industry are moving toward putting AI onto the GPU instead of the CPU.)  It's also why I created the EnemyBrain class: initially, all functionality was handled within the updates of the EnemyAI class, which meant every single enemy had a FixedUpdate() function that ran regardless of where that enemy was or whether it was even affecting the game.  Now those enemies don't even get rendered unless they have to be.  I'm also aware that trigger colliders in Unity can take up a lot of resources, so I have been slowly phasing them out in favor of more efficient methods.

A current task I'm working on is better pathfinding.  We've mostly got the nav meshes where we want them, but as changes to the scene occur, changes to the nav mesh need to be made as well.  I don't yet have a good pathfinding mechanism implemented, which can often cause strange movement behavior on the part of the enemies.  I intend to implement a form of A* to control how an enemy gets from point A to point B.  I can then use this to implement a Search() function in the EnemyAI script, which EnemyBrain can call to tell enemies to search an area.
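For anyone unfamiliar with it, A* repeatedly expands whichever known node looks cheapest, judging each node by the cost to reach it plus an optimistic estimate of the remaining distance.  A compact grid-based version looks something like the sketch below; this is a generic illustration of the algorithm, not the pathfinding that will ship in A.V. (ours has to work with nav meshes, not a grid).

```csharp
using System.Collections.Generic;
using UnityEngine;

// Generic, illustrative A* over a 2D walkability grid (4-way movement).
public static class AStar
{
    // Returns the path of {x, y} cells from start to goal inclusive,
    // or null if the goal is unreachable.
    public static List<int[]> FindPath(bool[,] walkable, int[] start, int[] goal)
    {
        int w = walkable.GetLength(0), h = walkable.GetLength(1);
        var gScore = new Dictionary<int, float>(); // best known cost to each cell
        var cameFrom = new Dictionary<int, int>(); // backpointers for the path
        var open = new List<int> { start[1] * w + start[0] };
        gScore[open[0]] = 0f;

        while (open.Count > 0)
        {
            // Pick the open cell with the lowest f = g + heuristic.
            int current = open[0];
            foreach (int c in open)
                if (F(c, gScore, goal, w) < F(current, gScore, goal, w))
                    current = c;
            open.Remove(current);

            int cx = current % w, cy = current / w;
            if (cx == goal[0] && cy == goal[1])
                return Rebuild(cameFrom, current, w);

            int[][] steps = { new[] {1, 0}, new[] {-1, 0}, new[] {0, 1}, new[] {0, -1} };
            foreach (int[] s in steps)
            {
                int nx = cx + s[0], ny = cy + s[1];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h || !walkable[nx, ny])
                    continue;
                int n = ny * w + nx;
                float g = gScore[current] + 1f;
                if (!gScore.ContainsKey(n) || g < gScore[n])
                {
                    gScore[n] = g;        // found a cheaper route to this cell
                    cameFrom[n] = current;
                    if (!open.Contains(n)) open.Add(n);
                }
            }
        }
        return null; // no path exists
    }

    // f = cost so far + Manhattan-distance estimate of the cost remaining.
    static float F(int c, Dictionary<int, float> g, int[] goal, int w)
    {
        return g[c] + Mathf.Abs(c % w - goal[0]) + Mathf.Abs(c / w - goal[1]);
    }

    // Walk the backpointers from the goal to recover the full path.
    static List<int[]> Rebuild(Dictionary<int, int> cameFrom, int end, int w)
    {
        var path = new List<int[]>();
        int c = end;
        while (true)
        {
            path.Insert(0, new[] { c % w, c / w });
            if (!cameFrom.ContainsKey(c)) break;
            c = cameFrom[c];
        }
        return path;
    }
}
```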

That's all I have for today.  Thanks for reading up to this point!  I'll check in again soon to share more about the AI development.

- Brockton Roth

2014/03/06

Triggers and Plugins are Fun, mmmk?

Today I am going to talk a little about our engine and how we accomplish interactions and saving / loading. Under the hood, games need some way for objects to interact with each other. Players need to be able to attack enemies, items need to be able to activate other items, etc. The player also needs to be able to save / load their game (unless you are developing for an NES-era system). The way we accomplish interactions is with triggers, more specifically a CatchTrigger script and an ActivatableTrigger script. Saving / loading is done with a free plugin, which greatly reduced development time by sparing us from creating a save / load script from scratch.

Anything that needs to activate something else gets an ActivatableTrigger script added to it. Anything that needs to be acted upon gets a CatchTrigger. Objects can even receive both scripts if they need to be able to act upon other objects and also be acted upon themselves.

The player, for example, will receive both the Catch and Activate trigger scripts. They need to be able to act upon objects in the world, like pressure-sensitive buttons, and also be acted upon by other objects, like weapons or crushers. On the other hand, the player's projectile will only receive the activate trigger. Creating these scripts and the way they interact was not very difficult thanks to Unity's component-based engine. Anything with a collider can have a script with an "OnTriggerEnter" function, which executes when one collider hits another collider. Simply add "OnTriggerEnter" to the Catch / Activate Trigger script and handle all the interactions within this function. Then you just need to add the scripts to any objects / new objects in the scene. A rough sketch of the idea is below.
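Here is a bare-bones sketch of the pattern (in practice each class lives in its own file, and a Rigidbody is needed on at least one of the two colliding objects for trigger events to fire). The method name Activate is illustrative; our real scripts handle many more kinds of interactions than this.

```csharp
using UnityEngine;

// Illustrative sketch: the activator notices the collision and notifies
// anything on the other object that can "catch" the activation.
public class ActivatableTrigger : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        CatchTrigger catcher = other.GetComponent<CatchTrigger>();
        if (catcher != null)
            catcher.Activate(gameObject);
    }
}

// Illustrative sketch: reacts when something activates it.
public class CatchTrigger : MonoBehaviour
{
    public void Activate(GameObject activator)
    {
        // Handle the interaction here: press a button, damage the player, etc.
        Debug.Log(activator.name + " activated " + name);
    }
}
```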

Next, we have the save / load plugin. At first, I was going to create a save / load script from scratch. However, before I started, I looked around and found a great free plugin that is available to anyone and is even free for commercial use. It is called the Unity Serializer and is available at http://whydoidoit.com/unityserializer/ and on the Unity Asset Store. As soon as you get it installed (which turned out to be more of a pain in the butt than it should have been, thanks to the Asset Store bugging out), it is super simple to get working with your game.

Once you get it installed, you can select the Serializer Wizard from the Window menu. This is where you use the main functionality of this plugin. First, using the wizard, you select an object on which to store the SaveGameManager. This is the main script of the Unity Serializer, and it handles all the actual saving / loading of your game. For our project, we have a "Global Scripts" game object which holds all scripts that should always be running; the pause script which handles the pause menu, for example, is stored here. After you select an object to store the manager, you can click on objects in your hierarchy and choose whether you want to store the object, or the object and all of its children, all from the Serializer Wizard. Most of the time you will want to store the children with the object; what is the point of a player mesh if you don't have any of the scripts attached to it?

The Serializer also comes with a custom pause script, which lets you pause, save the game, and load from whatever save game you want. You won't always want the player to be able to save / load whenever they want (we definitely don't in our game), so this serves as a good starting point and a way to understand how saving / loading works in this plugin. Plugins can be a great way to add functionality and features to your game without much time. However, sometimes they are more complex than needed and just add extra work. They are also not always free, and especially not usually free for commercial use. This one was a great exception, because it is actually something the author of the plugin is trying to get added to Unity by default; he thinks it is a core feature for games. It also could have been much more complicated than it is, but it turned out to be very easy to install and start using.

Make things easier when you can, but look down the road and make sure it will not be harder in the future.

2014/02/23

2D Aspects and Site Changes!

Welcome back to the Development blog, or to any of our new visitors, welcome! Last week, we heard from some of the 3D artists on the team about their progress on the game. Now, we're going to switch from three dimensions to two dimensions for this week's blog update! My name is Amanda Rivet, and I am a 2D artist for the team. (And yes, a fellow minion.) So far in my adventures through this semester, I have helped shape the user interface, create the main menu, and amp up the game's official website.

As you can probably see from visiting our website, our site has gone through a lot of visual changes. When I was first assigned the project of revamping the website, I was really inspired by the original banner's look.


I really loved the look of the circuits in red, green and blue. So, with that concept in mind, I decided to focus on those colors for my color scheme. I threw the banner into Illustrator and did several versions of the circuits in those colors. I focused on assigning each color to a certain section of the page: the blue and green would be the right and left bars of the page, and red would be in the center. I tried to make it as minimal as possible so that the backgrounds would not distract from the general content of the page. I even revamped the then-current logo into something very similar to the page's design.


Now the page is really clean and colorful, and it fits the theme of the game really well.

Another change to A.V. so far is the addition of the main menu! With a Unity skeleton provided by Brock, I was able to work on setting up a main menu. I decided to connect the game and the webpage more by implementing the circuits into the actual menu itself.


With Preston's help, we gave the menu an interesting animated feel by using two colored spotlights moving up and down the green and red circuits. Underneath the A.V. logo, I placed some blue circuits as well. The font used is called Fluoride, a freeware font created by Ray Larabie.

Finally, for the user interface, I was in charge of changing the instrument icons to make the purpose of each instrument clearer to the audience while keeping the feel of A.V. These are the icons before the update:

(Left to Right: Elevate, Accelerate, Activate, Deactivate, Freezer and Scrambler)

And these are the icons now, listed in the same order as the previous icons:

One of my goals for these icons was to incorporate the player into the feel of each instrument. As you can see in Elevate, A.V. is jumping. For Accelerate, A.V. is running. With the use of recognizable icons, players can now identify the actions of those instruments. For Activate and Deactivate, I did not want to change too much from what we had previously; I incorporated a vector of the pylons you have to activate/deactivate throughout the game, and simply kept the same concept attached to it. Freeze was a little harder to portray, but after a few new concepts, I stuck with a very simple, yet effective icon that portrays the ability to stop motion. Finally, for Confuse, I really wanted to implement something that looked very similar to the enemy. The problem with the previous icon was how abstract it was to the audience. People couldn't tell if it was a help icon, meant to confuse you, or meant to confuse the enemies. Ironically, the icon was confusing! I traced a vector over an image of one of the enemies already implemented in the game and placed several question marks to clear things up.

That's it so far on the two dimensional end! My next project is setting up the narrative animations for the game. Stay tuned for more development updates from A.V.!

--Amanda Rivet

2014/02/17

Visual Pipeline. Assets! Assets! Assets!

First off, I'd like to welcome any return visitors, and especially new visitors, to the AV site! Today I'm here to talk a little bit about the visual pipeline of the game AV. Our pipeline is very straightforward from start to finish. Currently we are using Autodesk's Maya 2014 and Adobe's Photoshop CS6 as our main programs. AV's art style is themed around what you might find inside a computer, both physically and virtually. This art style allowed us quite a bit of freedom when it came to the first step of our visual pipeline: concept development. During this stage we typically start out by gathering reference images and concepting our assets in 2D in Photoshop, until we are happy that the look and feel fits the overall theme of the game.

Then we move on to the fun part of the visual pipeline: the modeling stage! During this stage there are a couple of important factors that we take into account. The first is scale. Scale is important because it's much easier from a level designer's perspective if the assets all come into Unity at the same size, which saves time when piecing the level together. The second factor is polycount. The designers of AV wanted to keep the game as low-poly as possible, so as asset developers we had to be careful with our edge loops and extrudes and really plan out our assets so that we weren't wasting large amounts of time cleaning up our meshes later down the pipeline.

Once the modeling stage was done, it came time for probably the largest part of our pipeline: the texturing stage. This stage involves UVing the model and then exporting the UVs to Photoshop so we can paint over them to match AV's art style of "glowing outlines." This is the most time-consuming stage of our pipeline, because the UVs needed to be very accurate and precise; that precision is what let us follow the UVs as a guide for painting the "glowing outlines" to match the 3D asset in game.

Typically, once an asset makes it past the texturing stage, we do some final cleanup before export. This involves deleting polygon faces that are not needed, done on a per-asset basis to make sure each asset has the lowest poly count it can. Other important cleanup tasks include centering pivots, freezing transformations, and deleting construction histories. The final part of the cleanup stage is giving each asset a matching collision mesh grouped to it, allowing for easy drop-in playability in Unity. In conclusion, working on AV has been a great experience in game asset development for Unity, and I look forward to everyone playing the completed version.


-Nik


This week has been one of the busiest I've had since I've been at RIT. I have been on the verge of being overwhelmed by the numerous deadlines and time constraints. The toughest part of managing projects and encroaching deadlines in a variety of computer graphics and design foundation courses is getting my mind moving in the direction that it needs to. I'm aware of the time-frame and the overall progress that must be made, but I spin my wheels trying to make progress in too many different directions. The tasks required to achieve that progress begin to pile up and get lost in the ambiguity of when I should do each one and for how long. I am very interested in the subject matter of each one of my classes, and I believe that taking a full course-load with such an emphasis on design, project management, conceptualization, and time management is incredibly beneficial in making the connections that will help take my prospects as an employee to the next level.


However, because I have so many classes, I have to carefully manage how much time is being spent on any given task or project, even when I would like to spend all day working on it to truly get my mind and efforts proceeding in the right direction. All that being said, this week I worked on modeling, UVing, texturing, and exporting game-ready assets from Maya for Unity. The point I made earlier comes into play pretty early, as I often spend a decent amount of time fighting myself over whether or not a model is finished. This is the first big hurdle in getting started with any project, and one that continues to crop up as I try to balance what I want to accomplish as an artist/designer against what is being required of me as a student. My desire to focus my attention on the model can start to blur the other steps that are required, and the ambiguity of what to do next can really cripple an otherwise productive period of work.
Each project, and every step along the way, is an opportunity to learn or to put something into practice. A major factor in a digital artist's work is their ability to use the tools at their disposal efficiently, and to progress through a workflow while maintaining as much artistic control over the resulting output as possible. Doing the Zone 1 assets was no different, and I am glad this work forced me to UV rather accurately for a seemingly simple visual style. My efficiency and my understanding of the goals of the UV workflow, especially in relation to managing time versus the artistic goals that the UVs help to achieve, greatly improved thanks to much of my time being spent on those two aspects. Also, as this is the first time I have done 3D work for someone other than myself, I was met with another challenge: compromising not just with my personal voice in order to move forward, but also with a secondary internal voice that was much more focused on ensuring that the result of my work would meet the expectations of the team leaders.


Essentially this week can be summarized as my first experience as a 3D artist in which my work needed to meet the criteria for being 'game-ready'  and meet the approval of those who will decide whether that asset ends up in the finished product or not.
I'm ready for bed.


-Hunter