» Posts

Indie games II – Tooling

This post is the second part of the talk I gave at IES Alfons III and belongs to this root post.

So, how can we actually build a video game? We meet two subjects at this point. The first concerns mechanisms, that is, the technologies available for game development. The second is more about business: how to make a game appealing to the player.

Today we’ll learn about the former.



Thanks to technological advances, there are several ways to create a video game. As we'll see, there's no imperative need for programming skills, nor even for drawing skills. The only vital element at this point is game design. But we will leave that for the next post.

Today we will focus on the most popular tools, which are mods, game engines, and game frameworks.



A mod is an alteration of the original content of a video game. The alteration may consist of small modifications, or may result in an entirely new game.

The main advantage of mods is that no programming skills are required, so they’re quite accessible to everyone. Another good point is that you are allowed to use the original game’s assets, like graphics, music, maps, skills…

In the example above, the user can select a hero from the toolbox on the right, drop him onto the map, and just start using him, because he spawns fully operational.

Altogether, your work will look professional immediately, freeing you from technical issues and allowing you to focus on the part you prefer. For instance, a novelist may concentrate on the lore, while a designer may choose map modeling or gameplay.

People usually start modding a game when there's something they dislike or want to improve. The most elemental case is cosmetic modification, like creating custom hairstyles for the characters.

Experienced players tend to spot poor or faulty stat balance, which leads to mods for point management and skill modifiers.

The most complex grade of game modification is the creation of a fully new game. It's funny to imagine that these new games were born as small modifications that grew exponentially and went out of control.

Some companies provide modding tools to customize their own games, namely:

  • Construction Set – The Elder Scrolls see
  • World Editor – Warcraft see
  • Obsidian and Aurora toolsets – Neverwinter Nights see
  • Valve Hammer Editor (Source) – Half-Life see

The most popular mod is Counter-Strike, a mod for Half-Life. Games like Skyrim and Minecraft also have devoted mod architects. You can find a nice mod catalog here.


Game engine

Game engines are tools that allow easy but extensible game development. They provide a game's core functionality, like rendering, physics and AI, all enclosed inside a GUI assistant.

The best part is that you can manipulate complex game utilities with drag and drop. Most engines supply code editors as well, but they are often hidden to some degree, depending on the user's expected programming ability.

They grant a high level of customization, because the user can combine features in any way, without the game-genre bounds that a mod establishes. Thanks to their accessible GUI and powerful tools, they are useful for both novice and seasoned programmers.

Some very popular games were made with game engines, like Unreal and BioShock Infinite (Unreal Engine); Wasteland 2 and Hearthstone (Unity); and Hyper Light Drifter (GameMaker).

The latter, GameMaker, is also the one we use at CoderDojos, because it is handy enough for 12-year-old children to build a game in 2 hours. Here's a picture of our proud game dev team ^^:


From scratch

So we've discovered wonderful tools that were created to ease our job. But if you like coding and you are not afraid of getting your hands dirty, then this chapter is for you.

Video games can also be created from scratch, using a programming language. You may rely on third-party libraries for game features like physics and rendering, but you don't have to.

Basically, you are unbound. Your game will be as customized as you want. There are no limits or rules except the ones you establish. This means you can concentrate on low-level concerns (like performance) or set up your own code style; you can even make characters use rude language.

And of course, you are the owner of your creation and hold the intellectual property.

For game coding you may use an existing game framework. Some of the trendiest are LibGDX, XNA, DirectX, Cocos2d, Godot, Phaser… there's a list somewhere. You are free to develop your own framework as well…

But first, think about which language you are going to use, and select the one that suits you best depending on how comfortable you find it, its performance, its license, its portability…

I personally like using Java with LibGDX (actually, this blog is an external LibGDX tutorial). The main reason is that I really enjoy working with Java for learning, because it is a very restrictive language that prevents newbies like me from falling into bad practices. Java is also easy to debug, and I find IntelliJ IDEA quite friendly.
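To make "from scratch" a bit more concrete, here is a minimal fixed-timestep game loop in plain Java. This is a sketch with invented names, not LibGDX code: the world updates at a constant rate no matter how fast the machine renders.

```java
// A minimal fixed-timestep game loop (all names invented for illustration).
class GameLoop {
    static final double STEP = 1.0 / 60.0; // 60 logic updates per second
    double x = 0;                          // toy game state: a position

    void update(double dt) { x += 10 * dt; } // move 10 units per second

    void run(double seconds) {
        double accumulator = 0, elapsed = 0;
        while (elapsed < seconds) {
            // in a real game this would be the measured frame time
            double frameTime = 1.0 / 144.0;
            elapsed += frameTime;
            accumulator += frameTime;
            while (accumulator >= STEP) { // catch up in fixed logic steps
                update(STEP);
                accumulator -= STEP;
            }
            // render() would be called here, once per frame
        }
    }
}
```

After simulating one second of game time, `x` ends up close to 10 units, regardless of the simulated frame rate.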

Many notable games have been developed from scratch. Minecraft was made in Java using LWJGL. Fez and Terraria used XNA (C#). And finally, the Super Meat Boy guys built their own framework for C++ game development.



As you can see, there are many good options available to us. It's important to know these tools in order to choose the one that suits us best. But I don't think the success of a game depends on this. In the next post we'll learn about game design and business, which are far more relevant.


Indie games I – History

This post is the first part of the talk I gave at IES Alfons III and belongs to this root post.

To understand what indie games are, and how they fit in the scene, we must first build some context. The history of video games started not so long ago, yet they have undergone a remarkably quick evolution in just a few decades.

1962 – First game

The first video game was created in the same place where all relevant things happen: in a laboratory. Here are some nerds playing with their new experiment and taking notes on the results:

“Uhm, uhm, this looks so relevant to our interests.” The game was called Spacewar!.

One of the people involved in the project, Nolan Bushnell (remember the name), decided to install that game in a portable “box” and exhibit it in recreational venues, letting people try it.

Unfortunately, this experiment didn't work. The control panel was too complex and, as you will learn later, when players don't understand how the controls work, they get frustrated and stop playing.

Having learned this lesson, Bushnell created Pong. It was a very simple game for two players in which each player hits a ball back and forth across the screen using a “racket”. The racket was moved with a dial knob. Intuitive and accessible:

Unlike Spacewar!, Pong was a big success, and gave Bushnell enough money to found Atari.


70′s – First home consoles

During the 70′s, everyone was hooked on video games, and people started to demand domestic systems. In 1972 Magnavox launched the first home console: the Odyssey.

The Odyssey was very rudimentary. It provided plastic overlays that users stuck to the TV screen to create backgrounds, and then up to three dots were displayed on the screen. Each game assigned different behaviors to the dots. Altogether, the system simulated a game environment.

Here’s a modern family enjoying a wonderful afternoon, playing Pong on their Odyssey:

Atari also launched its own console, the Atari VCS. Later, other companies launched their own consoles, and there's a whole history branch for that, but let's move on.


1978 – Japan happens

The video game industry had been growing in Japan since the beginning and, from the day they set foot in America, they totally stood out.

In 1978, when everyone was hyped about sci-fi thanks to Star Wars and Star Trek, Tomohiro Nishikado developed Space Invaders, a game about killing aliens with your laser cannon. Obviously, the game was warmly welcomed by Americans (and the rest of the world).

Another significant Japanese hit was Pac-Man, where the player has to eat all the dots within a maze while avoiding the ghosts that prowl around it. The game achieved great success because it introduced a new genre, original and different from the space shooters that were crowding the arcade lounges.


1983 – Atari burial

Now let me tell you a sad story. Some years before, Atari had found itself unable to produce its Atari 2600 console for such a massive number of customers. Thus, Bushnell decided to sell the company to Warner Communications.

Being the evil corporation it was, Warner switched the company's philosophy from ‘craftsmanship’ to ‘profit’.

Their first step was to prevent programmers from signing the games they produced. This was quite painful for the programmers because, at that time, games were entirely designed and developed by one or two people. Imagine you create Pac-Man or something similar on your own, and then you cannot be credited as the developer of that game, and on top of that you have to use the name of a corporation that is oppressing you.

As a consequence, some of the best programmers left Atari and founded Activision.

At the same time, Warner bought the rights to Pac-Man and E.T., and set the Christmas campaign as the deadline to recover the investment. They pressured the remaining developers to finish games about those characters in less than 3 months. Those programmers were great professionals, but working demotivated and in a hurry, they produced awful games.

Warner produced millions of copies because it took for granted that players would blindly accept any game with the Atari logo. At Christmas, as Warner expected, people bought those games in bulk because they were Atari products. But soon after, thousands of dissatisfied consumers returned their copies. Warner didn't even have enough room in its warehouses to store all the unsold and rejected copies.

And this is what led to the Atari game burial. Warner secretly dug a hole in the New Mexico desert, threw all those copies inside, and buried them.

This is not just a tale (in fact, the copies were recently unearthed). It did happen. As a programmer, this story triggers conflicting feelings in me.

On the one hand, I'm sad about the way Warner treated its programmers and I really understand their frustration, because I've been treated that way. And I can empathize with Bushnell, watching the production of the company he founded with hope get buried in the deep.

But on the other hand, it's satisfying to imagine Warner hiding its disgrace in the deep, too ashamed of itself to let people know what it had done. It's like killing a ganker in a PvP game: it brings pleasure and joy to everyone.


Late 70′s – First personal computers

Consoles are systems optimized for gaming, but that is their only ability. Computers, by contrast, were back then enormous and complex machines aimed at expensive computations. That changed between the end of the 70′s and the early 80′s, when Apple, Amstrad, CBM and Sinclair, among others, presented their new machines (click on the links to see them).

Those systems were handy and fully equipped, providing keyboard, screen and floppy drive as a whole. They were able to handle ordinary routines but, most importantly, they supported games.

The user interacted with the system by typing orders on the command line. As their programming skills increased, they would develop more detailed and useful tools, and use floppy disks to save their code and distribute their work. This also applied to games.

Computer magazines like Byte regularly published game source code (known as type-in games) for readers to manually copy into their machines. Advanced users could improve that code or even create a completely new game from scratch. This gave rise to a new concept: user-generated content.

The coolest part is that this new content could be saved on floppy disks and disseminated among the community which, in turn, modified it and generated new content.


80′s, 90′s – Industry boost

During the last decades of the century, the video game industry established its place in the market. As the goal of this set of posts is not games history but indie games, let's just sketch out these decades and move on.

The 80′s were the golden age, with innovative genres (RPGs, simulators), new hardware (the NES), and more and more fans. You may feel disappointed that I'm passing through these years so quickly, knowing they were essential in video game history. I apologize. Maybe I'll write about them in the future.

The most relevant development of the 90′s was the increasing awareness of violence in games, due to works like Mortal Kombat and Doom. This controversy forced companies to label their games by age range and content, following the ESRB standard.


00′s – Industry laziness

During the 00′s, game franchises became established, and series like CoD, GTA, Final Fantasy and FIFA became the flagships of monopolies. Some of these are excellent games; as we will see later, they nail key points like gameplay, story and graphics. But release after release, behind the technical improvements, there's nothing new. The game is similar across the entire franchise.

And here is where indie developers get their chance. Sticking to their business strategy, huge corporations never gamble on innovation. They embrace the franchises they created or bought and exploit them, generating a void in the market that cries out for creativity.



So now we've learned some lessons from the past, and it's time to get to the point. Indie game developers may not be able to defeat huge companies by brandishing technical resources, because they are barely financed. But they certainly stand out through their talent, crafting original and imaginative content and recovering that old-school creativity that had been lost in time. This is what makes the difference.

But you will have to wait for my next post to learn the tale…





Indie games

Last week I gave a talk about indie games at IES Alfons III in Vall d'Alba. One of its teachers had seen my previous talk, where I spoke about AI. He thought it could be interesting for students to learn about video game development, but he was not sure whether they would understand a technical lesson. So we spoke on the phone and arranged a more suitable talk, focused on explaining how video games are created and motivating students to choose a related career.

After that phone call I made a little draft of the topics that should be covered, and then I brought that draft to a Hacknight. Some people joined in and helped me build a Trello board where four blocks of subjects were identified:

  • History of videogames. read
  • Definition of indie (in games). read

Then I developed the content for each point. I got the information from documentaries, gaming websites and Wikipedia, but the most valuable content came from the anecdotes people knew and told me. The history of video games and indie development is full of anecdotes, and those became, no joke, the key to connecting with my young audience and keeping their attention.

After the research, I designed my Prezi to be visual and funny. You can see it here, but it doesn't make much sense if it is not combined with the speech. So I recommend you read the related posts while watching the slides.

You should start reading here, or click on the links above to read the related posts. The video of the talk will be available ASAP (in Spanish only).


Refactoring: A quick overview

Hi! As promised, here I am again with another “basic concepts for coding lovers” post. Before proceeding, I'd like to say thank you to all the people who read my last post and gave me feedback, because you help me very much during my learning process. :)

Well, after this short but certainly emotive declaration, let's get into the matter. Today we'll learn about refactoring. The knowledge for this post comes from this book, this site and this site; then I applied my own experience to filter the relevant data and make a very basic, summarized guide. Don't expect to find here all the code smells or refactoring techniques that exist, because that is not the purpose of this post.

If you really want to learn how to refactor, you should absolutely read “Refactoring: Improving the Design of Existing Code” by Martin Fowler (and follow him everywhere). I'd also like to make some altruistic advertising for these guys, who regularly run courses and dojos to spread the word of divine code, amen.


Why to refactor

“If it ain’t broke, don’t fix it”. The program may not be broken, but it does hurt. (M.F.)

I've heard these words so many times, and they've always upset me. I think they're the most irritating words along with “It has always been done this way”. So what.

When code is dirty, it becomes a nightmare for all the programmers working with it, because it is difficult to understand, hostile to changes, and hides bugs like hell.

Clean code, instead, is made to be read by humans and to help them do their work, whether that's applying changes or finding bugs. Good code is also reusable.

Refactoring is like tidying a chaotic garage. Imagine it, so full of paint buckets, broken toys and scrap that you can't even park your car in it. So you roll up your sleeves and start hanging the tools on the wall and getting rid of unused stuff until everything is clean and tidy.

Now you can use your garage and the objects in it; you can quickly find the hammer on the wall and the hose in the storage boxes. Besides, you've got empty space to add new features, like parking a car or setting up a workbench. You even recovered forgotten toys, so you won't need to buy new ones for your kids.

And most importantly, now everyone, including you, loves that garage and enjoys working in it, and will probably make an effort to keep it clean, because a rag thrown in the middle of your tidy garage would be immediately spotted.

So yeah, a chaotic garage may still work as a garage, but everyone prefers your zen garage.


There are some tips to apply when refactoring.

Code must work

First of all, the code must work correctly before starting the process. Otherwise you won't know whether a failure is due to a known bug or is a side effect of your refactoring.

First refactor, then add

This rule is explained by Kent Beck's metaphor of the two hats:

Whenever you are programming, you wear one of two hats:
The adding hat allows you to add new functionality and write tests.
The refactoring hat is dedicated to restructuring the code and running the tests.


You can only wear one hat at a time, and you should always be conscious of which hat you are wearing. Meaning: do not start refactoring if you have not finished adding new logic, and do not stop refactoring whenever you think of a new feature. Just write the idea down and finish the refactoring, then come back to it.

Be always supported by tests

You should create a test suite with which you feel comfortable and confident enough to proceed with a refactoring. The tests must be written while wearing the adding hat, and ideally driven by TDD, a design technique I would like to write about in the future. (Meanwhile you can read this.)

Your tests must cover all the parts of the code that will be affected (and potentially broken) by your refactoring, either directly or collaterally. Once you have them, start the refactoring and run all the tests each time you take a step. This way you monitor the changes to the code and immediately know whether your last change broke something.

Baby steps

Don't make huge changes at once. Try to go little by little, taking steps as small as possible, and always running the tests after each step. This minimizes the area affected by your last change and makes it easier to spot the bug in case the tests fail.

This does not mean that you can't make big changes to your code. It means that big changes must be split into small ones, regularly tested during the refactoring process.


Bad smells

If it stinks, change it (M. F.)

Code smells are symptoms that make us suspect our code needs a refactoring treatment. So what we're about to do is learn how to recognize code smells; later we will speak about how to deal with them.

Speaking with a colleague recently, I was asked whether a code smell is a “problem” in the code. Well, I wouldn't say it is exactly a problem, more like a warning. Sometimes we even introduce them on purpose to highlight something. So don't start mass-killing bad smells: you have to be thoughtful and proceed wisely.

Long method / Large class

A very long method or a large class is probably taking on way more responsibilities than it should, but the worst thing about them is that they are completely unmanageable.

There are several opinions about how long a method should be to be considered “too long”. Usually, 10 lines, or the space you can see in your IDE without scrolling.

For classes, it’s hard to say. Probably that file with 500 lines of code is stinking like a rotten fish.

No shortcuts: the best way to know whether a method or a class is too big is to inspect it and determine whether it is rightly in charge of all those functions. After doing this, check the code itself, because it may contain several smells that are making it so long, like duplicated or unused code.

Data clumps

These are groups of fields that are related and dedicated to the same function. You can often recognize them by their names:
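The original screenshot is gone, so here is a hypothetical reconstruction of such a clump (the field names are invented):

```java
// A data clump: these race-related fields always travel together,
// which suggests they belong in a class of their own.
class Kitten {
    String raceName;
    String raceCountryOfOrigin;
    int raceAverageWeight;
    int raceLifeExpectancy;
}
```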

This set of fields is obviously dedicated to manage the race of the kitten.

Sometimes you may find data clumps preceded by comments, or somehow separated from the rest of the code, which should make your spider-sense tingle.


Switch statements

Not only switches: endless if/else chains and similar structures are also considered smells, because this is not how object orientation works. We'll see an example later.

Refused bequest

This happens when a subclass does not use all the methods inherited from its parent. It is a symptom that either the class is incorrectly considered a subtype of that superclass, or the superclass is carrying methods that would better belong to another interface:
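A hypothetical sketch of the smell (the classes are my invention, not from the original post):

```java
// Refused bequest: Fish inherits walk() from Animal but has no use for it.
abstract class Animal {
    String walk() { return "walking"; }       // not every animal walks...
    String breathe() { return "breathing"; }
}

class Fish extends Animal {
    // Fish refuses its bequest: walk() makes no sense here.
    // A Walker interface would be a better home for that method.
    @Override String walk() {
        throw new UnsupportedOperationException("fish don't walk");
    }
}
```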


Shotgun surgery

This smell appears when making a single change requires many small changes across many different classes. It usually means that your classes are too coupled to each other.



Comments

Code that needs a lot of explanatory comments is, by definition, incomprehensible code. And, as we said before, code must be written for human readers; ideally, it should read as prose. So instead of commenting a cryptic block, rewrite it until the comment becomes unnecessary.

Zero comments, 100% understandable.
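A hypothetical before/after (the example and every name in it are mine, not from the original post):

```java
// Before: the comment compensates for opaque code.
class Before {
    boolean check(int age, boolean vaccinated) {
        // kitten can be adopted if it is weaned and has its shots
        return age >= 8 && vaccinated;
    }
}

// After: the names say it all, no comment needed.
class After {
    boolean canBeAdopted(int ageInWeeks, boolean isVaccinated) {
        return isWeaned(ageInWeeks) && isVaccinated;
    }
    private boolean isWeaned(int ageInWeeks) { return ageInWeeks >= 8; }
}
```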

Duplicated code

If you need a functionality again, reuse it. Please stop copy-pasting.

Sometimes you may duplicate code on purpose, because you are not sure where that code should be encapsulated, or because you really don't know what it will look like when you finish the feature. In that case, wait until it is repeated three times, then start considering encapsulation. This technique is known as the Rule of Three.

Lazy class, dead code and speculative generality

Different names for the same thing: unused code. If a piece of code is not meaningful enough, or you are not using it, or you don't need it right now, get rid of it.

Don’t write code for future eventualities, like “yeah, well, let me create this interface called Poisonous just in case someday I want to add snakes to my game ’cause ya know”.

As my C teacher used to say, “good programmers are lazy programmers”.

Feature envy

When a method or class is continuously accessing the data of another object, it would probably be happier having that data within itself.

Middle man

A class whose only purpose is delegating to another class:
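The original screenshot is lost, but a later section names the classes involved, so this is a reconstruction of the idea (method bodies are invented):

```java
// Middle man: MiddleCat adds nothing, it only forwards every call to Kitten.
class Kitten {
    String pet() { return "purr"; }
    String feed() { return "nom"; }
}

class MiddleCat {
    private final Kitten kitten = new Kitten();
    String pet() { return kitten.pet(); }   // pure delegation
    String feed() { return kitten.feed(); } // pure delegation
}
```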

This class has no reason for existence.


OK, so now that we're done with code smells, it's time to get into refactorings. I must warn you that refactoring is not an exact science. Each scenario needs a personalized analysis and a specific treatment. So I'm providing the tools, but it's your duty to decide which tool best suits your situation.

Extract method

Replace a piece of code with a method call:
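A hypothetical sketch (the invoice example is mine): the detail-building code moves into its own, well-named methods.

```java
// Before: everything built inline, cluttering print().
class InvoiceBefore {
    String print(String owner, double amount) {
        String header = "Invoice for " + owner + "\n";
        String details = "amount: " + amount + "\n" + "tax: " + (amount * 0.21);
        return header + details;
    }
}

// After: each extracted method has a name that explains itself.
class InvoiceAfter {
    String print(String owner, double amount) {
        return header(owner) + details(amount); // reads like prose
    }
    private String header(String owner) { return "Invoice for " + owner + "\n"; }
    private String details(double amount) {
        return "amount: " + amount + "\n" + "tax: " + (amount * 0.21);
    }
}
```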

This is useful not only to lighten the code, but also to make it more meaningful: if you gave a good name to the extracted method, you know what's going on just by reading its call.

You get a similar result with extract variable.

Then, along the lines of extract variable, we have replace magic number with symbolic constant, which hides a hard-coded number behind a constant with a self-explanatory name:
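A minimal sketch (example invented):

```java
// Before: what does 3.14159265 mean to a hurried reader?
class CircleBefore {
    double area(double r) { return 3.14159265 * r * r; } // magic number
}

// After: the constant's name explains itself.
class CircleAfter {
    static final double PI = 3.14159265;
    double area(double r) { return PI * r * r; }
}
```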

Decompose conditional

This technique consists in replacing complex conditionals with understandable method calls, like this:
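A hypothetical reconstruction (the feeding example is mine):

```java
// Before: an opaque boolean expression.
class FeederBefore {
    int portion(int hour, boolean hungry) {
        if ((hour > 7 && hour < 22) && hungry) return 100;
        return 0;
    }
}

// After: each part of the condition is a named method call.
class FeederAfter {
    int portion(int hour, boolean hungry) {
        if (isFeedingTime(hour) && hungry) return fullPortion();
        return nothing();
    }
    private boolean isFeedingTime(int hour) { return hour > 7 && hour < 22; }
    private int fullPortion() { return 100; }
    private int nothing() { return 0; }
}
```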


Extract class

Although the procedure is similar to extract method, there is an extra benefit: this refactoring allows you to disengage a functionality from a class that was not responsible for it.

Extract superclass / interface

Create shared superclasses and interfaces for classes that have common fields and methods.

Replace method with method object

When it is not possible to extract a method because plenty of local variables are mixed together, extract the method to a class and transform those vars into fields:
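A sketch of the idea (the pricing example and its numbers are invented):

```java
// Replace method with method object: the tangle of locals that blocked
// extraction becomes fields of a small, dedicated class.
class PriceCalculation {
    private final double base;
    private double discount; // former local variable
    private double tax;      // former local variable

    PriceCalculation(double base) { this.base = base; }

    double compute() {
        discount = base > 100 ? base * 0.1 : 0;
        tax = (base - discount) * 0.21;
        return base - discount + tax;
    }
}
```

With the locals promoted to fields, each step of `compute()` can now be extracted into its own private method.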

Move method

Move method and move field consist in moving a method or a field from one class to another. This can be a solution for feature envy.

Hide delegate / Remove middle man

Going back to our example: MiddleCat is hiding Kitten's methods, meaning that a client using MiddleCat will never know about Kitten‘s methods, because it is forced to access them through MiddleCat.

This technique, called hide delegate, is useful in other cases, but not in this one.

Remove middle man deletes the pointless intermediate class, exposing the final methods directly to the client. In this example MiddleCat is begging to die, so we should apply remove middle man out of mercy.

Encapsulate field

Convert a public field to private and provide public methods to work with it.

The most interesting example of this refactoring happens when we are working with collections, because it is rather reckless to allow everyone to play with such a delicate data structure. So instead of exposing it, make it private and create methods for specific operations:
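A hypothetical sketch (class and method names invented):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Encapsulated collection: the list is private and clients go through
// intention-revealing methods; the getter returns a read-only view.
class Shelter {
    private final List<String> kittens = new ArrayList<>();

    void rescue(String name) { kittens.add(name); }
    void adopt(String name) { kittens.remove(name); }
    List<String> residents() { return Collections.unmodifiableList(kittens); }
}
```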

This way you have better control over how your collections (and fields in general) are used from the outside.

Hide method

Convert a public method to private or protected.

When programming a new class, I usually give its methods the minimum visibility, and then relax it as external objects need to access them. This way I make sure there's no indecorous access to my shy resources.

Consolidate conditional expression

Both consolidate conditional expression and consolidate duplicate conditional fragments are reorganizations of conditionals.

If several conditionals lead to the same result, unify them and write the result only once.

If all the branches of a conditional share the same code, move it outside the conditional so it is written once, without duplication.
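Both consolidations, sketched on invented examples:

```java
// Consolidate conditional expression: three conditionals with the same
// result collapse into one well-named check.
class ConsolidateExpression {
    double rateBefore(int age, boolean sick, boolean onHoliday) {
        if (age < 1) return 0;
        if (sick) return 0;
        if (onHoliday) return 0;
        return 1.5;
    }

    double rateAfter(int age, boolean sick, boolean onHoliday) {
        if (notEligible(age, sick, onHoliday)) return 0;
        return 1.5;
    }
    private boolean notEligible(int age, boolean sick, boolean onHoliday) {
        return age < 1 || sick || onHoliday;
    }
}

// Consolidate duplicate conditional fragments: the shared closing step
// moves out of the branches.
class ConsolidateFragments {
    String feedBefore(boolean hungry) {
        if (hungry) return "big bowl" + ", wash bowl";
        else return "small bowl" + ", wash bowl";
    }

    String feedAfter(boolean hungry) {
        String bowl = hungry ? "big bowl" : "small bowl";
        return bowl + ", wash bowl"; // shared fragment written once
    }
}
```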

Replace conditional with polymorphism

This is my favourite one. Imagine you have a conditional such as a switch that decides, for each animal type, how the animal reacts to a caress.

The main problem with this structure is that you will have to add cases every time you want to implement the caress for a new animal, and as the game grows, this piece of code will become unmaintainable.

Polymorphism consists in creating classes with a common interface and delegating the implementation to those classes. So for the example above, we first create a class for each branch of the switch, plus a shared interface.

And then we replace the whole conditional with a single method call.
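A reconstruction of the idea, since the original screenshots are gone (the caress behavior per animal is invented):

```java
// Before: a switch that must grow with every new animal.
class CaressSwitch {
    static String caress(String animalType) {
        switch (animalType) {
            case "kitten": return "purr";
            case "dog": return "wag tail";
            default: throw new IllegalArgumentException(animalType);
        }
    }
}

// After: each class implements a common interface...
interface Animal { String caress(); }
class Kitten implements Animal { public String caress() { return "purr"; } }
class Dog implements Animal { public String caress() { return "wag tail"; } }

// ...and the whole conditional becomes a single polymorphic call.
class Petting {
    static String caress(Animal animal) { return animal.caress(); }
}
```

Adding a new animal now means adding a new class, without touching any existing code.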

Lovely, elegant and fluffy as a stuffed kitten.

Pull ups and push downs

We say pull up when a field, a method or a constructor is moved from a subclass to a superclass, and push down when a field or a method is moved back from a superclass to a subclass.



Knowledge and, especially, practical experience are the best ways to learn. But in the end, refactoring is a matter of hunches.

In my next post I'll be speaking about design patterns, which are just a deeper level of the eternal wisdom of refactoring.

SOLID principles

I'm applying for a job at a frankly interesting company, so I'm reviewing all the good programming practices I was taught and have learned throughout my career. These are essential for writing good code; however, an alarming number of "programmers" don't really understand them (or have actually never heard of them).

In this and subsequent posts I will try to explain fundamental coding principles, refactoring techniques, design patterns and so on, as clearly as possible, and probably using ailuromaniac examples. Today we'll speak about the SOLID principles.

SOLID comprises the five basic principles of object-oriented programming. When applied together, these principles make software easily maintainable and extendable. wiki



S – Single responsibility principle wiki


A class should encapsulate only one responsibility, and that responsibility should be entirely encapsulated by the class.

Robert C. Martin relates the concept of “responsibility” with “reason to change”:

A class should have only one reason to change


Take a look at this class:
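(The screenshot is lost; here is a hypothetical reconstruction. Only setSkin and calculatePathToDestination appear in the original text; the rest of the methods are invented.)

```java
// A god-class: appearance, AI, audio and rendering all in one place.
class Kitten {
    private String skin;

    void setSkin(String skin) { this.skin = skin; }  // appearance
    String getSkin() { return skin; }

    String calculatePathToDestination(String to) {   // AI / pathfinding
        return "path to " + to;
    }

    void meow() { /* sound playback */ }             // audio
    void render() { /* drawing code */ }             // graphics
}
```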

It is obviously bearing more than one responsibility, because it has methods for a lot of different, unrelated actions. For instance, setSkin has nothing to do with calculatePathToDestination, because the color of the kitten's fur has no effect on the calculation of a path.

From another point of view, this class may change for several reasons: changing the implementation of the skin, changing the implementation of any of the AI branches…

The right implementation encapsulates each responsibility within its own object, and the class Kitten is only dedicated to gathering those objects by composition:
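A sketch of the composed version (class names are my invention):

```java
// Each responsibility lives in its own class...
class Appearance {
    private String skin;
    void setSkin(String skin) { this.skin = skin; }
    String getSkin() { return skin; }
}

class PathFinder {
    String calculatePathToDestination(String to) { return "path to " + to; }
}

// ...and Kitten merely gathers them by composition.
class Kitten {
    private final Appearance appearance = new Appearance();
    private final PathFinder pathFinder = new PathFinder();

    Appearance appearance() { return appearance; }
    PathFinder brain() { return pathFinder; }
}
```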

Now each class has its own reason to change, and Kitten barely does. This also means that when a class changes, none of the others is affected by the change.


O – Open/closed principle wiki


A class should be open for extension, but closed for modification. This means that your superclasses should be generic enough to be extended by their children as they are.


In the code below, the class Cow extends the class Animal and implements its abstract method eat:

This does not work, because cows are vegetarian, but we are forcing them to eat meat.

But if we abstract the type Meat into an interface, such as Food, then not only cows but all animals will be able to use the eat method as they prefer:

Meaning, the Animal superclass is generic enough to be extended by a large and diverse crowd of creatures as it is, without modifications:
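The screenshots are lost, so here is a reconstruction. The class names Animal, Cow, Meat and Food come from the text; the method signatures are my guess:

```java
// Before (the problem, shown as a comment): Animal forced every subclass
// to eat Meat, a concrete type.
//
//   abstract class Animal { abstract String eat(Meat food); }

// After: Meat hides behind the Food abstraction, so Animal can be
// extended by any creature without being modified.
interface Food { String name(); }
class Meat implements Food { public String name() { return "meat"; } }
class Grass implements Food { public String name() { return "grass"; } }

abstract class Animal {
    abstract String eat(Food food);
}

class Cow extends Animal {
    String eat(Food food) { return "cow eats " + food.name(); } // grass, happily
}

class Kitten extends Animal {
    String eat(Food food) { return "kitten eats " + food.name(); }
}
```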

And they will be happy and live forever in harmony.



L – Liskov substitution principle wiki


In my opinion, this is one of the most complex and misunderstood principles, along with the dependency inversion principle.

It states that objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program. Also explained this way:

If S is a subtype  of T, then objects of type T may be replaced with objects of type S.

Yeah well, this is quite abstract, so let's see it in a practical example.


Here, our classes Kitten, Dog and Scorpion extend Animal and override its method pet():
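(The original code screenshot is lost. Below is my hypothetical reconstruction, guessed from the surrounding text: Animal's implicit contract is that pet() makes the animal happy via beHappy(), and the subclasses break that promise in different ways.)

```java
class Animal {
    boolean happy = false;
    void pet() { beHappy(); }        // implicit contract: petting → happiness
    void beHappy() { happy = true; }
}

class Kitten extends Animal {
    @Override void pet() { happy = false; } // scratches you instead
}

class Dog extends Animal {
    @Override void pet() { happy = false; } // too busy barking
}

class Scorpion extends Animal {
    @Override void pet() {
        throw new UnsupportedOperationException("never pet a scorpion");
    }
}
```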

If we try to replace Animal with Kitten, the system will fail, because the method pet() overridden by Kitten does not work the same way as beHappy(). Same for Dog. And Scorpion does not even need the pet() method because, seriously, did you ever try to pet a scorpion? Moreover, it breaks the YAGNI principle.

We cannot replace the type Animal with these subtypes. This violation can be solved by extracting the pet() method from the superclass into an interface, like this:

So now you could replace the type Animal with any of its subtypes, and it will still work, because pet() is no longer Animal's method, but Touchable's method instead.
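A minimal sketch of the fixed hierarchy (the return strings are invented just to make it testable):

```java
// Hypothetical sketch: pet() lives in the Touchable interface, so only
// animals that can actually be petted promise to implement it.
abstract class Animal {
    String beHappy() { return "is happy"; }
}

interface Touchable {
    String pet();
}

class Kitten extends Animal implements Touchable {
    public String pet() { return "purrs"; }
}

class Dog extends Animal implements Touchable {
    public String pet() { return "wags tail"; }
}

// Scorpion is still a perfectly valid Animal; it just never claims
// to be Touchable, so no caller will ever try to pet it.
class Scorpion extends Animal {
}
```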


BUT pay attention, because the code below does NOT violate the Liskov principle:

This implementation respects the principle because, as long as the precondition gluttony = 0 is true, the class Animal can be replaced by its subtype Kitten and the system will deliver the same result.

One of the best things about Liskov's principle is that by applying it you are unconsciously nudged toward composition over inheritance, which is itself a very important principle.


I – Interface segregation principle wiki


Many client-specific interfaces are better than one general-purpose interface.

Your interfaces should be as small and specific as possible. This will allow you to reuse them and also to avoid forcing classes to depend on methods that they do not need.


Here’s a “general-purpose” interface named IPhysicalEntity:

As you can see, it gathers everything that a physical entity may need. Here are some classes that implement this interface:

Notice that many methods were not actually needed. But if we split our huge interface into smaller ones, this way:

Then each class implements only the interfaces it actually needs, and we can get rid of unused code:
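The split might look roughly like this (the interface and method names are hypothetical, extrapolated from the example):

```java
// Hypothetical sketch: the huge IPhysicalEntity is split into small,
// client-specific interfaces that classes mix and match as needed.
interface Mobile {
    String move();
}

interface Touchable {
    String pet();
}

interface Targetable {
    String target();
}

class Kitten implements Mobile, Touchable, Targetable {
    public String move()   { return "scampers"; }
    public String pet()    { return "purrs"; }
    public String target() { return "locked on kitten"; }
}

// A rock only needs one of the interfaces, and implements nothing else.
class Rock implements Targetable {
    public String target() { return "locked on rock"; }
}
```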

You can eventually add or remove interfaces from your classes as the specs change.

Personally, I like this principle because it makes classes more readable. For instance, extending the example above, you can tell an object's capabilities at a glance:

It’s just like reading prose: Kittens can move (Mobile), can be petted (Touchable), can be set as a target (Targetable), and so on.


D – Dependency inversion principle wiki


Depend upon abstractions. Do not depend upon concretions.



This violates the rule because the object kitten is declared using a concrete type (Kitten). This is how it should be done:

We declare the object as a generic Animal, and then assign it an instance of the concrete type we want that animal to be.
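In a sketch (the Animal/Kitten types are carried over from the earlier examples):

```java
// Hypothetical sketch: the variable depends on the abstraction;
// the concrete type appears only on the right-hand side.
abstract class Animal {
    abstract String speak();
}

class Kitten extends Animal {
    String speak() { return "meow"; }
}

class Demo {
    static Animal makeAnimal() {
        Animal kitten = new Kitten();  // declared as the abstraction
        return kitten;
    }
}
```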

It's quite simple, and actually I don't think the problem with this principle is that it's misunderstood; it's more that it requires very strong self-discipline. For instance:

Would you say that this code is SOLID?:

behavior is (quite rightly) declared as a generic type (Behavior). But then, in the constructor, we assign it an instance of KittenBehavior, which is a concretion. And we are giving that responsibility to Kitten. From now on, our Kitten class is eternally coupled to the KittenBehavior type.

But if we do this way:

And decide the type in the invocation:

Then Kitten will no longer depend on this concretion. This technique is called dependency injection, and it allows us to inject the concretion from outside the class, releasing it from a responsibility that was never its own.
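A sketch of constructor injection along these lines (the Behavior classes and their strings are invented for illustration):

```java
// Hypothetical sketch: the concrete Behavior is chosen by the caller,
// not hard-coded inside Kitten.
interface Behavior {
    String act();
}

class KittenBehavior implements Behavior {
    public String act() { return "chases the laser pointer"; }
}

class CatBehavior implements Behavior {
    public String act() { return "ignores you with dignity"; }
}

class Kitten {
    private final Behavior behavior;

    Kitten(Behavior behavior) {   // injected from outside
        this.behavior = behavior;
    }

    String update() { return behavior.act(); }
}
```

The invocation decides the concretion: `new Kitten(new KittenBehavior())` or `new Kitten(new CatBehavior())`, and Kitten itself never changes.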

You could now swap KittenBehavior for WhiteKittenBehavior, CatBehavior, or whatever XBehavior you like, and Kitten will stay functional.



I’ve always been an organized person. Some people may call it OCD but well, I don’t think it’s so bad.

So you could say SOLID was an epiphany for me, because before we met I was very uncomfortable with my code but didn’t know exactly what was wrong with it. Now I know that EVERYTHING was wrong. From that day I’ve learned quite an amount of techniques and rules, and honestly, the more I discover, the dumber I feel.

Unfortunately what Kent Beck said here was true:

Adopt programming practices that “attract” correct code as a limit function, not as an absolute value. [...] you will create what mathematicians call an “iterative dynamic attractor”. This is a point in a state space that all flows converge on. Code is more likely to change for the better over time instead of for the worse; the attractor approaches correctness as a limit function.

The painful implication of this statement is that, despite our efforts, our code will only approach goodness, never achieve absolute perfection.

But well, you will always be closer to perfection if you are well trained, so I’ll try to gather and demystify some of the key coding skills here. Besides, trying to clearly explain something is one of the best ways to realize that you actually don’t understand it.

In another vein, throughout this post I've been doing what we call refactoring, a process through which code is restructured without changing its external behavior, aiming to improve the quality of the software. I'll be writing about it later in this blog, but if you feel intrigued read this.

Also, in upcoming posts we'll explain some design patterns and, buddy, you should know about them.

Find me at the Maker Day

Well, this is escalating so quickly! I've been invited to bring my talk about AI to the Gasteiz Maker Day. I feel very honored to take part in an event at my hometown ^^


The Maker Movement

The European Maker Week is an initiative promoted by the European Commission aiming to attract people to the Maker world.

The Maker Movement is the name given to people from different backgrounds who apply DIY and DIWO techniques and processes to develop unique and innovative technologies, products and solutions.

As an aspiring software craftswoman I strongly identify with the maker movement and believe in its values and everything it stands for.



The event

The Gasteiz Maker Day will be held in Vitoria Gasteiz on June 4 (tomorrow!! :D) and has been organized by the guys from Gamaker. It is an open, free event. They will be offering a lot of cool activities, including workshops, talks and round tables. And there's pizza for lunch!!

You can follow all the news about the event with the hashtag #GasteizMakerDay.

We’ll meet there! See you ^^

AI in games III – Pathfinding

This post is the third part of the talk I gave on the WTM and belongs to this root post.

Pathfinding is the process by which the machine calculates the shortest or fastest path from its position to the position of its target.

Pathfinding sits on the border between Autonomous Movement and Decision Making, because usually we first decide where we want to go, then the pathfinder works out how to get there, and finally we move to the goal using a movement control system.

Throughout this post we'll see several techniques and tools used for calculating a path.


1. Graph theory


Before getting into the algorithms, I’d like to explain three basic concepts from graph theory, just to get the jargon.

We will imagine our game map as something like this:


The whole thing will be called a Graph. The round shapes are Nodes, and the black lines connecting each node to its neighbors will be called Connections. Math dudes may be a bit upset because they use other words for these concepts (edge, vertex…), so I'd like to clarify that I'm using these particular words because they're the ones used in computer science and, furthermore, by gdxAi.

So keep them in mind: Graph, Node, Connection.

Before using the algorithm, we must apply a discretization of the space, which consists in converting our continuous map into a finite set of nodes. For instance, in the following image, I've used a grid to divide the space into cells:





Our map implements the Graph interface. It stores a list of all the nodes on the graph, as well as the methods needed to access them.

Each Node keeps its coordinates and a reference to its neighbors.

Finally, each Connection contains a reference to the two nodes it connects, and a cost. The cost can be whatever we want, usually the time or the distance between nodes, and it is the key indicator for verifying that our route is indeed the shortest or the fastest.
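A stripped-down sketch of these three concepts in plain Java (gdxAi's real interfaces, such as Graph and Connection in com.badlogic.gdx.ai.pfa, are generic and richer than this):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified, self-contained sketch; not gdxAi's actual types.
class Node {
    final float x, y;                               // coordinates
    final List<Connection> connections = new ArrayList<>();
    Node(float x, float y) { this.x = x; this.y = y; }
}

class Connection {
    final Node from, to;
    final float cost;                               // e.g. distance or time
    Connection(Node from, Node to, float cost) {
        this.from = from; this.to = to; this.cost = cost;
        from.connections.add(this);                 // register as outgoing
    }
}

class Graph {
    final List<Node> nodes = new ArrayList<>();
    Node add(float x, float y) {
        Node n = new Node(x, y);
        nodes.add(n);
        return n;
    }
}
```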


2. Indexed A*


Now that we have a manageable number of nodes, we'll apply our algorithm. Several algorithms exist, but the most popular in video games at the moment is Indexed A*.

Very briefly, and reusing the previous graph, the algorithm works like this:


Imagine we are at node 1, and we'd like to go to node 6.

First, we evaluate our neighbors and select the one that is closest to the goal. In this case, node 5. Then we apply the same process to the selected node: the neighbor closest to the goal is node 4. We keep going until we reach the goal or hit a dead end. The result, in this case, would be the node path [1, 5, 4, 6].
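The walk just described can be sketched as a self-contained greedy search (plain Java, no gdxAi; the node coordinates below are invented so that the example reproduces the [1, 5, 4, 6] path; real A* additionally tracks the accumulated cost of the path so far):

```java
import java.util.*;

class GreedyWalk {
    static float dist(float[] a, float[] b) {
        float dx = a[0] - b[0], dy = a[1] - b[1];
        return (float) Math.sqrt(dx * dx + dy * dy);
    }

    // Repeatedly move to the unvisited neighbor closest to the goal.
    static List<Integer> search(Map<Integer, int[]> neighbors,
                                Map<Integer, float[]> pos,
                                int start, int goal) {
        List<Integer> path = new ArrayList<>();
        Set<Integer> visited = new HashSet<>();
        int current = start;
        path.add(current);
        visited.add(current);
        while (current != goal) {
            int best = -1;
            float bestDist = Float.MAX_VALUE;
            for (int n : neighbors.get(current)) {
                if (visited.contains(n)) continue;
                float d = dist(pos.get(n), pos.get(goal));
                if (d < bestDist) { bestDist = d; best = n; }
            }
            if (best == -1) return null;   // dead end
            current = best;
            path.add(current);
            visited.add(current);
        }
        return path;
    }

    // Invented layout of the six-node example graph from the post.
    static List<Integer> example() {
        Map<Integer, int[]> nb = new HashMap<>();
        nb.put(1, new int[]{2, 5});
        nb.put(2, new int[]{1, 3});
        nb.put(3, new int[]{2, 5, 6});
        nb.put(4, new int[]{5, 6});
        nb.put(5, new int[]{1, 3, 4});
        nb.put(6, new int[]{3, 4});
        Map<Integer, float[]> pos = new HashMap<>();
        pos.put(1, new float[]{0, 0});
        pos.put(2, new float[]{1, 2});
        pos.put(3, new float[]{3, 2});
        pos.put(4, new float[]{3.5f, 0});
        pos.put(5, new float[]{2, 0});
        pos.put(6, new float[]{5, 0});
        return search(nb, pos, 1, 6);
    }
}
```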



Once we’ve built our graph system, we can go ahead with the Pathfinder:


Use the PathFinder interface and implement its method searchNodePath. This method takes a start node (our position), an end node (the goal), and a Heuristic.

The heuristic is the function that tells us which node is the one "closest to the goal", more or less accurately depending on our preference. Greater precision requires more resources, so find the balance between accuracy and performance.

The outPath is passed as a parameter for performance reasons, but actually it is the path that will be calculated inside the method. The result is a GraphPath.


3. Hierarchical Pathfinding


Hierarchical pathfinding plans a route in much the same way as people would. You plan an overview route first, and then split it into stages. Each stage of the path will consist of another route plan.

I'll take myself as an example. I live in Valencia now, but I was born in Vitoria. On holidays, I go back to visit my family and friends. How do I get there?

First, I plan the general route:

  1. Drive the car to the train station.
  2. Take a train to Vitoria.
  3. Be driven home.

That’s what I have on my mind the day before.

The next day, when starting my journey, I start developing the first step:

1. Drive the car to the train station

1.1. Drive to the highway

1.2. Take exit 5

1.3. Drive the road to the train station

For now, I don't care about what I'll do when I reach the train station; I just focus on the step I'm performing.

When I reach the train station, I develop stage 2:

2. Take a train to Vitoria

2.1. Go to the ticket machine

2.2. Print the ticket

2.3. Look for the platform at the screens

2.4. Walk to the platform

2.5. Wait for the train

2.6. Go inside the train

Then I start developing stage 3 and so on, until I reach my goal.

This technique, called "Divide and Conquer", consists in splitting a huge problem into smaller, more manageable chunks, allowing us to focus on only one chunk at a time.

This type of pathfinding provides a lot of advantages, among others:

  • Once we've plotted the general route, we can just focus on the current step, ignoring the rest.
  • In video games the target often changes. That's not a problem: we just discard the remaining path from the step we were in, and calculate a new general path.
  • We are able to solve really complex paths, because we can divide them over and over until we can solve the next step directly.



The implementation is very much the same as the traditional pathfinder; in fact, we use the same interface. The only difference is that the graph is organized in levels, meaning our graph is now a HierarchicalGraph.

The pathfinding algorithm must be applied at each level recursively. Use the setLevel method to switch the graph between levels.


4. References

I wrote this detailed post about pathfinding two months ago, so half of the work was fortunately done. Here's an amusing site where you can play with pathfinding algorithms. Wikipedia and Amit are also very illustrative; the latter in particular has a dedicated shrine in my living room and I'd be proud to kiss his feet.

The package provided by gdxAi for Pathfinding implementation can be consulted here.


5. Overview

  • Graph theory is relevant. wiki
  • gdxAi uses the Indexed A* algorithm. wiki
  • Hierarchical Pathfinder divides to conquer. wiki

This is the end of the content of the talk, go back to the root post to see the recorded talk and the Prezi presentation ->




AI in games II – Decision making

This post is the second part of the talk I gave on the WTM and belongs to this root post.

We say that a machine is making decisions when it processes a set of information to generate an action that it wants to carry out.

In this section we'll look at two of the most common decision making techniques used in game development: state machines and behavior trees.

1. State Machines


This term comes from automata theory. A finite state machine (FSM) is a device that has a finite number of states it can be in at any given time, and that operates on input either to make a transition from one state to another or to cause an output or action to take place. An FSM can only be in one state at any moment in time.

So in its purest form it consists of three elements: States, transitions and events. If you look at the following diagram:


This is a state machine that manages the movement of a hero in a game. His states are: standing, jumping, ducking, and diving. The machine can only be in one state at a time.

He can't move from one state to another randomly; instead, each state has a set of associated transitions. In our example, to start ducking, he must first be in the standing state.

Finally, events are the inputs sent to the machine by the user (button presses and releases).

This FSM allows us to change the behavior of the hero, i.e., it makes the entity's behavior mutate depending on its internal state.
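A minimal, stand-alone sketch of the movement FSM above (the event names are hypothetical; gdxAi's real StateMachine/State pair is far more general than this):

```java
enum HeroState { STANDING, JUMPING, DUCKING, DIVING }

class HeroFsm {
    HeroState state = HeroState.STANDING;

    // Each event is only honored from the states that allow it;
    // invalid transitions leave the state unchanged.
    HeroState handle(String event) {
        switch (event) {
            case "press_B":
                if (state == HeroState.STANDING) state = HeroState.JUMPING;
                break;
            case "press_down":
                if (state == HeroState.STANDING) state = HeroState.DUCKING;
                else if (state == HeroState.JUMPING) state = HeroState.DIVING;
                break;
            case "release_down":
                if (state == HeroState.DUCKING) state = HeroState.STANDING;
                break;
            case "land":
                if (state == HeroState.JUMPING || state == HeroState.DIVING)
                    state = HeroState.STANDING;
                break;
        }
        return state;
    }
}
```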


Concurrent FSM

State machines can be concurrent; for instance, we could use this one for movement, another for combat, and another for communication, allowing the hero to move, fight and swear at the same time.


Hierarchical FSM

State machines can also be hierarchical. They consist of a set of super states and sub states arranged in a tree structure. Events entered by the user run down the tree until a state is able to handle them. In the image below, the user inputs the press trigger event into the system. The super state combat cannot handle it, so it delegates the event to its child states. The child state firing is able to handle it.




The entity must keep an instance of the state machine, and implement an update method that calls the machine itself. After that, everything is delegated to our implementation of the StateMachine, which manages the logic and the transitions between States.


2. Pushdown automaton

There's one issue with traditional state machines, and that's their lack of memory: FSMs don't keep a history. However, a history could sometimes be useful. For instance, let's imagine we want our hero to be able to shoot. Starting to shoot isn't a big deal; he just transitions to the firing state from whichever state he is in.

But, what will happen after the firing sequence is done? We don’t know, because we don’t remember in which state he was before starting to shoot.

This can be solved using a pushdown automaton. This FSM version stores states in a LIFO stack. The current state of the character will always be the last entered state. So in our example:


When the hero is about to start shooting, the firing state is pushed onto the stack. The firing sequence is executed and, when it finishes, it is popped off the stack. The state beneath it emerges, making the character go back to his previous state.
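The mechanism boils down to a plain LIFO stack; a minimal sketch (state names as simple strings, purely illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch of a pushdown automaton: the current state is
// whatever sits on top of a LIFO stack.
class StackFsm {
    private final Deque<String> stack = new ArrayDeque<>();

    StackFsm(String initial) { stack.push(initial); }

    String current()        { return stack.peek(); }
    void push(String state) { stack.push(state); }   // e.g. start firing
    String pop()            { return stack.pop(); }  // firing sequence done
}
```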



Read the gdxAi wiki for further information. Also, here's the best post about FSMs I've read so far. gdxAi provides this package for FSM management.


3. Behavior Trees

In recent years, behavior trees (BTs) have become the major formalism used in the game industry to build complex AI behaviors. This success comes from how simple they are for non-programmers to understand, use and develop.

Behavior trees were originally pioneered in robotics and soon adopted for controlling AI behaviors in commercial games such as the first-person shooter Halo 2 and the life-simulation game Spore.

As the name suggests, we can picture them as a tree, with leaves and branches, where leaves contain the actions that describe the overall behavior, and branches decide the relationships between leaves.



The key concept is that the leaves contain both their internal state and the logic of the actions. So to use a leaf, we first evaluate its internal state and, depending on the result, decide whether or not to execute the action. Leaves return one of two states: Success or Failure.



There are several types of branches.

The Selector branch iterates over its leaves until one of them returns Success; it then executes the logic inside that leaf and the tree is considered a success. As you can see here:


Our goal is to reach a safe spot, so we try different actions until one of them succeeds. First, we try to take cover, but there's no hiding place nearby, so the leaf returns failure. Then we try to leave the risk area, but we can't because the open field around us is too large; this leaf also returns failure. And so on, until one of the leaves succeeds, or until we run out of possibilities, in which case the system returns gdxAi's native Failure leaf and the whole tree is considered failed.

To finish our tale: had we asked the squad for cover fire and had they been available in range, the ask for cover fire leaf would have returned success, the action would have been performed, our hero would have reached the safe spot, and the tree would have merrily succeeded. Cool.


The Sequence branch, on the contrary, evaluates the leaves until one of them returns Failure.


First, we look around searching for a safe spot. If we find one, the leaf returns success. Then we try to run to the spot; the leaf returns success and the action of running is executed. Finally, we do a barrel roll and get into cover. As long as everything goes well, each leaf executes its logic, one after another, until there are no more leaves to evaluate.

The tree will be considered a failure if any of the leaves in the sequence fails.
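gdxAi models branches and leaves as Task objects; the stand-alone sketch below reduces a leaf to a Supplier<Boolean> (true = Success, false = Failure) just to show the two evaluation rules:

```java
import java.util.function.Supplier;

// Minimal sketch of the two branch types; not gdxAi's Task API.
class Bt {
    @SafeVarargs
    static boolean selector(Supplier<Boolean>... leaves) {
        for (Supplier<Boolean> leaf : leaves)
            if (leaf.get()) return true;   // first success wins
        return false;                      // ran out of options
    }

    @SafeVarargs
    static boolean sequence(Supplier<Boolean>... leaves) {
        for (Supplier<Boolean> leaf : leaves)
            if (!leaf.get()) return false; // first failure aborts
        return true;                       // every leaf succeeded
    }
}
```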

You could also execute leaves randomly using the Random Selector and Random Sequence branches.

Also, use the Parallel branch for concurrent, non-conflicting behaviors or for group behaviors:




gdxAi recommends defining behavior trees externally, either with formatted text or external libraries; everything is then translated into Tasks.

Here's my own attempt at a behavior tree, based on davebaol's example:

Yeah, it’s the behavior of a kitten, so what ¬¬

Focus on the third level of the tree:

  • Parallel (lines 4 and 5) the kitten will simultaneously play and attempt to be distracted. There’s a 0.8 probability of getting distracted, so eventually this leaf will succeed.
  • Sequence (lines 7 to 9) the kitten meows 3 times, then walks, then destroys something.
  • Selector (lines 11 and 12) simulates a nap that is interrupted every 5 seconds.


4. Conclusion

The main problem with State Machines is the exponential growth of states and transitions. Even worse, states cannot easily be reused without worrying about transitions becoming invalid when they serve different portions of the AI logic. Essentially, State Machines lack a proper level of modularity.

Behavior Trees increase such modularity by encapsulating logic transparently within the states, making states nested within each other and thus forming a tree-like structure, and restricting transitions to only these nested states.

Therefore, we will use State Machines only when we have a small and probably immutable number of states and transitions.

In brief:

  • State machines react to events by making transitions between states. wiki
  • Pushdown automaton keeps a stack of states. wiki
  • Behavior trees are composed of leaves and branches. Leaves contain both state and logic, and branches decide relationships. wiki

Keep going! In the next post we’ll speak about pathfinding. ->




AI in games I – Autonomous movement

This post is the first part of the talk I gave on the WTM and belongs to this root post.

One of the most fundamental requirements of artificial intelligence is to move characters around in the game sensibly. I mean, how to make a dude move on his own.

At this point we have two options: Steering behaviors and Formations.

1. Steering Behaviors


Steering behaviors are a set of algorithms that allow a character to move across the environment in different ways, depending on a strategy.



So, for instance, a monster who follows you when you walk close to him uses a pursuing strategy. When you run out of his range, he walks back to the point where he was before you bothered him, employing an arriving strategy. Once he's there, he applies a wandering strategy, which makes him move randomly within a certain area.

Strategies can also be combined, for example, the enemy could be pursuing you while evading the walls of the dungeon.

The gifs below show what some of these strategies look like, using my beloved gdx-ai tests:

Arrive behavior:


Pursue behavior:

Legal advice: I'm not taking any responsibility for any harm that octopuses (or similar creatures) may cause when provided with AI. I just do the math.

You can also combine several behaviors using the BlendedSteering behavior.

I've shared the screencasts of these behaviors on my YouTube channel: Arrive, wander, pursue, BlendedSteering.


Group strategies

Steering behaviors can be applied to both individuals and groups. The only difference is that while individuals need a target, groups use a proximity area.

For instance, in the image below, we’re applying a cohesion behavior, which is a group behavior producing a linear acceleration that attempts to move the agent towards the center of mass of the agents in its immediate area defined by the given proximity.


By contrast, separation behavior produces a steering acceleration repelling from the other neighbors within the proximity area:



For implementation we use the calculateSteering method of the SteeringBehavior class, which receives a Steerable agent (the entity to which we want to apply a behavior), and returns the calculated SteeringAcceleration. This output encapsulates both linear and angular accelerations for you to use them on your physics engine.
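As a self-contained illustration of the kind of math behind one of these behaviors, here's a stripped-down arrive calculation (this is not gdxAi's actual Arrive class, which works on Steerable agents and vector types; the slow-radius scaling shown is one common formulation):

```java
// Sketch of an "arrive" steering output: the linear velocity points at
// the target at full speed, and scales down inside a slow radius.
class Arrive {
    static float[] steer(float px, float py,     // agent position
                         float tx, float ty,     // target position
                         float maxSpeed, float slowRadius) {
        float dx = tx - px, dy = ty - py;
        float dist = (float) Math.sqrt(dx * dx + dy * dy);
        if (dist < 1e-6f) return new float[]{0, 0};   // already there
        float speed = dist < slowRadius
                ? maxSpeed * dist / slowRadius        // ease in near target
                : maxSpeed;
        return new float[]{dx / dist * speed, dy / dist * speed};
    }
}
```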



I’ve written several posts about steering behaviors in the past, find them here, here and here. You can also check davebaol’s tests here and read a deeper explanation at the wiki.


2. Formation Motion


We speak of a formation when a group of individuals moves in a cohesive way.

gdxAi manages formations using so-called “slots”. Those entities willing to belong to the formation must be assigned a slot. One of the slots is set as the leader, and all the other slots in the formation are defined relative to this slot. Effectively, it defines the “zero” for position and orientation in the formation.

In the following image, the red spacecraft is the leader slot. When it moves, the other spacecraft will follow it, keeping the V formation pattern:



The leader slot can be replaced by an anchor point if needed, which is just an invisible point with an associated position. For instance, in the image below, the cursor only has a position and does not interact with its environment in any way, yet the kitten formation follows it:




Implementation is (very) summarized in the following chart:


The Formation class is provided with a FormationPattern. The formation pattern represents the shape of the formation. For instance, in the example above, we were using a V formation pattern.

The Formation class does the magic and outputs a set of slots. These slots must be assigned to the member candidates using a SlotAssignmentStrategy. This assignment strategy will change depending on the role of each slot, but that’s another story.
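gdxAi computes slot locations through its FormationPattern implementations; the sketch below is a hypothetical, stand-alone V pattern, just to show the idea of slot offsets defined relative to the leader and rotated into world space:

```java
// Hypothetical V-formation slot math; not gdxAi's FormationPattern API.
class VFormation {
    // Offset of slot i behind the leader: alternating left/right arms.
    static float[] slotOffset(int i, float spacing) {
        if (i == 0) return new float[]{0, 0};   // the leader slot
        int arm = (i % 2 == 0) ? 1 : -1;        // right or left arm
        int depth = (i + 1) / 2;                // how far back along the arm
        return new float[]{-depth * spacing, arm * depth * spacing};
    }

    // World position: leader position plus the rotated offset.
    static float[] worldPosition(int i, float spacing,
                                 float leaderX, float leaderY,
                                 float orientation) {
        float[] off = slotOffset(i, spacing);
        float cos = (float) Math.cos(orientation);
        float sin = (float) Math.sin(orientation);
        return new float[]{
            leaderX + off[0] * cos - off[1] * sin,
            leaderY + off[0] * sin + off[1] * cos
        };
    }
}
```

When the leader (or anchor point) moves or rotates, recomputing each slot's world position keeps every member in the V.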



For a better understanding, look for this subject at the gdxAi wiki. All the tools provided by the framework for formation management can be found here.

3. Conclusion

So what we have so far is:

  • Steering Behaviors using strategies.  wiki
  • Formation motion using slots. wiki

Keep going! In the next post we’ll speak about decision making. ->


How to program a robot to dominate the world


Hey everybody! Today I spoke at the WTM event in Castellón, giving a 30-minute talk about artificial intelligence and how to implement it in video games using gdxAi.

And so I thought it would be interesting to write up the content of that talk as a post. This will allow me to develop an explanation of what AI is, and at the same time I'll be able to link my sources and embed the recorded talk and the Prezi. Let's do this!



As mentioned above, I'm using gdxAi for implementation, which is an artificial intelligence framework written in Java for game development with LibGDX. Their wiki is a great starting guide, but if you consider yourself a proud programmer, I strongly recommend downloading the code from the GitHub repository and jumping into these tests headfirst.



I started writing a single post containing everything, but I thought its enormity would discourage readers, so I divided it into three more digestible posts.

Artificial Intelligence is the ability of a machine to solve a problem by itself. This includes a wide range of techniques, but let’s focus on the most common ones (click on the links to read the content):

- Autonomous movement

- Decision making

- Pathfinding


The talk

Here’s the video for the talk:

Kindly recorded and edited by the staff from decharlas.

The presentation used for the talk is available on Prezi; there's an English and a Spanish version.



I’ve used a lot of external references on the posts above, mainly obtained from the gdxAi wiki. You can also check a list of all the references mentioned on this talk and on the whole blog in the Bibliography page.


