Wednesday, October 1, 2014

Program your shaders in Webfunge

I released Webfunge at Evoke a couple of weeks ago. I made it because at the previous Revision, someone made a shader interpreter in Scheme, and wow! If it's possible in Scheme, then it's possible in Befunge. It was the perfect opportunity to finally make a project in Befunge.

I have always loved Befunge for three reasons. The most important one is that Befunge code lives in an n-dimensional space, whereas all other languages, as far as I know, are only one-dimensional. It's a modular storytelling programming language! Code has a topology, and unique patterns emerge that couldn't exist in any other language.

The second reason is that it's stack-based: everything happens on a stack of values, and computations are ultimately stack manipulations. For instance, the add instruction pops two values off the stack and pushes back their sum. Most code written today falls in the C family, which doesn't compute that way, so it's really refreshing. Lua uses a similar, although simpler, stack, and it has been a source of inspiration for Webfunge.
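The add instruction can be sketched in a few lines of JavaScript (an illustration of the principle, not Webfunge's actual code):

```javascript
// A tiny value stack, as Befunge uses it: "add" pops two values
// and pushes their sum back.
const stack = [];

function push(v) { stack.push(v); }

function add() {
  const b = stack.pop();
  const a = stack.pop();
  stack.push(a + b);
}

push(2);
push(3);
add(); // the stack now holds [5]
```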

The last reason I love Befunge is that people can't understand how anyone could love it. I eventually fell in love when I read this code:
12481> #+?\# _.@
which is a 1..16 random number generator. If you ever try to make sense of it, beware! It's a trap!

Webfunge isn't compliant with the Befunge spec. For one, I added instructions that Befunge lacks: for instance, replace in stack (which pops an index off the stack, then pops a value, then sets the stack element at that index to this value) doesn't exist in Befunge, whereas it exists in Lua. Moreover, shader code relies heavily on number and vector operations, so I added some convenient instructions to deal with them. Obviously I hadn't planned these changes right from the start; I began the project with a simple interpreter. In the end, it can do all kinds of shader stuff, like ray tracing.
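The replace-in-stack instruction described above could look like this (a sketch with made-up names, not the actual Webfunge implementation):

```javascript
// "Replace in stack": pop an index, pop a value, then overwrite the
// stack slot at that index with the value. (Hypothetical sketch.)
function replaceInStack(stack) {
  const index = stack.pop();
  const value = stack.pop();
  stack[index] = value;
}

const s = [10, 20, 30, 99, 0]; // value 99 and index 0 sit on top
replaceInStack(s);
// s is now [99, 20, 30]
```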

Once the proof of concept was validated, I optimized it a lot. The main tips are:
  • don't use a plain Array, because they are type-agnostic and slow; use typed arrays such as Uint8Array and Float32Array with an index pointing to the last element of the array
  • avoid allocations (i.e. garbage collections)
  • use Web Workers to distribute the load
It's a lot faster than at the beginning; however, it will never perform as fast as WebGL shaders...
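The typed-array tip can be sketched like this (illustrative; the names are mine, not Webfunge's):

```javascript
// A preallocated Float32Array plus a "top" index replaces a plain
// Array, so pushes and pops never allocate (no garbage collection).
const data = new Float32Array(1024);
let top = 0; // index of the next free slot

function push(v) { data[top++] = v; }
function pop()   { return data[--top]; }

push(1.5);
push(2.5);
const sum = pop() + pop(); // 4, with no allocation along the way
```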

I even started a static compiler, but it would take me too much time to finish it. For now it can convert
0y2y/ 1y3y/ 4y @
function getColor(x, y, offset) {
  output[offset    ] = (x/width);
  output[offset + 1] = (y/height);
  output[offset + 2] = time;
  output[offset + 3] = 256;
}
which uses the original interpreter's variables.

The basic idea is to build an AST and generate JS code when the IP's path is known, and to bail out to the original interpreter otherwise. Some ideas:
  1. A path is defined after the IP's delta is set, in particular by >, v, <, and ^. Jumps to subroutines also define paths. There is also a path starting at (0, 0) heading east, as it's the beginning of the program.
  2. Each path is executed inside a case of a switch block. If the path ends with a change to the IP's delta, then loop on the switch again, otherwise bail out to the interpreter.
  3. Stack values are typed as constants or expressions. Constants are useful for reducing code complexity. In the code above, the y instruction popped the constant value 0, thus accessed stack[0], which contains the expression value x, so the result of 0y is x.
  4. If bailing out can never happen, remove the interpreter part entirely. It could even compile into GLSL, but then none of the unique features of Webfunge would be used, so there's no point doing it.
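Idea 2, the switch over paths, might look roughly like this (a hypothetical shape, not the actual compiler output):

```javascript
// Each known IP path becomes a case of the switch; a path ending with
// a delta change loops on the switch, anything unknown bails out.
function run(bailOut) {
  let path = 0; // the path starting at (0, 0), heading east
  while (true) {
    switch (path) {
      case 0:            // straight-line code compiled from the grid
        path = 1;        // path ended with a delta change: loop again
        break;
      case 1:
        return "done";   // this path reaches @ (end of program)
      default:
        return bailOut(); // unknown path: fall back to the interpreter
    }
  }
}
```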
If you want to continue this, feel free! Take a look at the repo on GitHub.

Wednesday, March 12, 2014

Moooo, my website gets a new design

For the last two weeks I’ve been redesigning my personal website. And it's now available! If you’ve already seen it before, you know it definitely needed a new design. Have fun comparing: a screenshot of the old one is still available at

The design spotlights the Alps, where I was born. Reblochon is a traditional cheese, often baked with potatoes and cream to make tartiflette. The brown cow represents the Tarine breed, the emblematic cow of the Tarentaise valley, whose milk produces awesome cheeses such as Beaufort. The black quads are slate boards used to serve cheese, laid over a picnic blanket. I admit slate isn't a tradition in my region, but I like it anyway.

I’m especially proud to have created the graphical design myself, given that I'm bad at that! I learned to draw inspiration from other modern sites and to carefully understand where their strengths lie. My cousin also gave me a lot of good advice, for which I deeply thank him.

I also want to be more active on social networks. Therefore this blog got a quick redesign too! I hope to take more time to share analyses of game design and media art. Maybe I will continue my French blog as well.

But the most important thing is by far to produce games again! By the way, you can access a hidden game by clicking on the central cheese, or with this url: It would be so funny if it were an actual Alpine tradition!

Friday, March 30, 2012

Flying Stock Exchange

I've always thought that traders were just kids playing with paper planes inside huge buildings, drawing meaningless lines that rule the world. Well, it might be true.

Play online now (best enjoyed under Chrome)

Flying Stock Exchange is my submission to this month's Experimental Gameplay Project theme, Economy. The game illustrates the paradox of over-seriousness and great liberty in the financial industry.

The first playable version had horizontal gameplay: the plane had a speed, suffered friction, and even stalled at low speed, but could speed up again thanks to bonuses. I didn't like it. I wanted to focus on the vertical gameplay, which lets the player plot a chaotic price curve. The horizontal axis is a timeline, so it makes no sense to manipulate it through acceleration or friction; therefore the plane has a constant speed. There is no progression in the game's complexity: it is possible to gain money by collecting every bonus item, but doing so is quite difficult and the player always loses money on average. The score is thus the time she manages to survive before the inevitable crash.

I used LimeJS again. The game is really simple: it just has a small number of nodes, animated within a game loop. Everything is rendered with DOM elements, except for the trail, which is dynamic and therefore drawn with a canvas.

LimeJS is a framework, and as I made the game within three days, everything is hardcoded. Last weekend I heard about CraftyJS, which is more like a game engine: there are entities, components, and events. I'm really interested in doing a project with it. LimeJS's clear advantage is its animation handlers, which is why I'd say it is very suitable for user interfaces rather than games in particular.

I wanted to represent traders as children playing with paper planes, observing their flight to speculate on the market. If you're a trader, don't take it personally, I know you're not allowed to throw a paper plane in your office! Mouaaahaahaah! Anyway, the guy was well-dressed in the first version, but I really wanted to emphasize the childhood. In the end he's cute in a diaper, don't you think?

At first the background was very desaturated, almost white, but after an argument about it with a friend, colors are now brighter. The blue background contrasts well with the green and red foreground and the orange kid.

The "music", actually just synthetic wind, gives a sensation of calm and liberty. The chimes played when collecting a bonus reinforce the childhood side. Note that there is no advanced audio mechanism in LimeJS, so we have to tweak the volume in the game loop to emulate fades.
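The volume trick might look like this inside the game loop (a sketch; the audio object and its volume property stand in for whatever LimeJS exposes):

```javascript
// Nudge the volume a little on every tick until it reaches the
// target, emulating a fade without any built-in audio mechanism.
function fadeStep(audio, target, dt, fadeSpeed) {
  const delta = fadeSpeed * dt;
  if (audio.volume < target) {
    audio.volume = Math.min(audio.volume + delta, target);
  } else {
    audio.volume = Math.max(audio.volume - delta, target);
  }
}

const wind = { volume: 1.0 };
// ten ticks at 60 fps, fading out at 2 volume units per second
for (let i = 0; i < 10; i++) fadeStep(wind, 0.0, 1 / 60, 2.0);
```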

Thursday, March 15, 2012

Blur effect

What I said in my previous post about shaders in D is sadly no longer up-to-date, so I want to bring something new. Here is a little project testing:
  • Derelict3
  • OpenGL 3+ API
  • Multipass post-processing effect via shaders and render-to-texture
  • D builder for SCons
  • SCons targets

Eventually, the result is a program that blurs a restricted zone of an image.

Blurred cow on a sharp photo.

Get the project on GitHub.

Derelict3

Derelict3 is the new version of Derelict, a project providing D wrappers for C libraries. It is still an alpha version, though, so consider using Derelict2 if you really want to pacify your boss.

The heart of this project is DerelictGL3, which wraps OpenGL up to the latest version, 4.2. I'll talk about OpenGL in the next section.

In order to use OpenGL, we have to create a window and its OpenGL context. Derelict3 provides two libraries for that: SDL2 or GLFW3. Both are new versions in development, so once again, the code may break at any moment. As I used DerelictSDL last time, I tried to stay on this library. That was a bad idea: SDL2 is not able to open an OpenGL 3.2+ context for now. So I gave DerelictGLFW3 a try. This time it works up to OpenGL 3.3; my graphics card doesn't seem to accept any higher version. Both libraries have somewhat similar concepts, so it was not a big deal.

I finally used DerelictIL, a wrapper of the image library DevIL, to load a picture as an OpenGL texture. Loading is really easy: it only consists of a function call. Again, this may not fit production environments, where textures should be compressed in some low-level format which is much more efficient. But for test purposes, it comes in handy.

The wrapper code is very lightweight; it's easy to dig into the files to find function prototypes or constants. It also seems easy to make a wrapper over any library; next time I'll try that. However, I regret that the files contain no documentation, and not even names for the function parameters. As they wrap libraries in development, prototypes may change compared with the libraries' official documentation. So don't be surprised if they don't match.

OpenGL 3+

I was playing with OpenGL at a time when shaders didn't exist. As new versions come, the API changes, sometimes radically. As such, glBegin and glEnd are now deprecated, view and projection matrices are now handled by shaders... well, let's learn modern OpenGL from scratch. Sadly, a lot of resources on the Internet still use old methods, or a mix of old and new ones...

I don't plan to deliver for the mass market, so I target version 3.3 (but I'll understand if you target a lower version). Shaders also have to declare GLSL version 330.

My goal was only to blur parts of an image, but this can be extended to simulate depth of field, for example. This is entirely a post-processing effect, so it involves shaders. The basic idea is to get a blurry version of the image and mix it with the original, sharp version, with a blending factor depending on the position of each pixel (in my case, related to the mouse cursor position). The mix is done in one pass, but getting the blurry image itself takes two passes (explanations), so we end up with three passes: horizontal blur, vertical blur, and mix.
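The final mixing pass boils down to a linear blend; here is the idea sketched in JavaScript rather than GLSL (the distance-based falloff is an assumption of mine, not the actual shader):

```javascript
// mix(a, b, t), as in GLSL: returns a when t = 0, b when t = 1.
function mix(a, b, t) { return a * (1 - t) + b * t; }

// Blend a sharp and a blurred pixel value: fully sharp at the
// cursor, fully blurred beyond some radius.
function blend(sharp, blurred, distToCursor, radius) {
  const t = Math.min(distToCursor / radius, 1.0);
  return mix(sharp, blurred, t);
}

blend(0.8, 0.2, 0, 100);   // at the cursor: 0.8 (the sharp value)
blend(0.8, 0.2, 200, 100); // far away: 0.2 (the blurred value)
```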

When several passes are involved, the output of one pass shall be the input of the next, except for the last pass, which can render directly to the screen. This is known as Render to Texture (RTT for short): the output of a pass is written to a texture, which in turn feeds the next pass. RTT requires a framebuffer defining the output textures. In this case, there is only a single output, the color of each pixel, but one may also define a depth buffer or render to several textures. I won't explain further; a lot of tutorials already exist.

Feeding a pass with a texture eventually just means applying the texture to a quad covering the whole screen space. I used to draw the quad with four vertices, but a friend told me a single triangle, large enough, could cover the screen as well, and even be better for the graphics pipeline (not verified). To render faces using the new API, we need to set up a vertex buffer object, a UV buffer object and an index buffer object. On top of that, a vertex array object retains the states, so it will be easier to restore them in the rendering loop. Again, I won't explain in detail; lots of sites already do. The loop is really straightforward: bind a framebuffer, the correct texture, and a shader program, render the quad, and voila, the texture contains data for the next pass.

Shaders are not complicated. The vertex shaders don't do anything related to the post-processing effect itself; everything is in the fragment shaders. Below is the code for the horizontal blur, inspired by an article on The shaders in that article have a little flaw: the sum of all coefficients is 0.98, resulting in a small but noticeable darkening. The following code fixes this problem by adding two new samples with coefficient 0.01.
#version 330

in vec2 texCoord;

out vec4 fragColor;

uniform sampler2D texture;
uniform vec2 resolution;

void main() {
    float blurSize = 1.0 / resolution.x;
    vec4 sum = vec4(0.0);
    sum += texture2D(texture, vec2(texCoord.x - 5.0 * blurSize, texCoord.y)) * 0.01;
    sum += texture2D(texture, vec2(texCoord.x - 4.0 * blurSize, texCoord.y)) * 0.05;
    sum += texture2D(texture, vec2(texCoord.x - 3.0 * blurSize, texCoord.y)) * 0.09;
    sum += texture2D(texture, vec2(texCoord.x - 2.0 * blurSize, texCoord.y)) * 0.12;
    sum += texture2D(texture, vec2(texCoord.x -       blurSize, texCoord.y)) * 0.15;
    sum += texture2D(texture, vec2(texCoord.x                 , texCoord.y)) * 0.16;
    sum += texture2D(texture, vec2(texCoord.x +       blurSize, texCoord.y)) * 0.15;
    sum += texture2D(texture, vec2(texCoord.x + 2.0 * blurSize, texCoord.y)) * 0.12;
    sum += texture2D(texture, vec2(texCoord.x + 3.0 * blurSize, texCoord.y)) * 0.09;
    sum += texture2D(texture, vec2(texCoord.x + 4.0 * blurSize, texCoord.y)) * 0.05;
    sum += texture2D(texture, vec2(texCoord.x + 5.0 * blurSize, texCoord.y)) * 0.01;
    fragColor = sum;
}
As I write this, I notice that blurSize is constant for all fragments, so I could have computed it in the vertex shader. Anyway, it's not very optimized.
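As a sanity check, the sum of the kernel coefficients above can be verified quickly:

```javascript
// The 11 coefficients of the blur kernel must sum to 1.0, otherwise
// the blurred image is darkened (or brightened).
const coeffs = [0.01, 0.05, 0.09, 0.12, 0.15, 0.16,
                0.15, 0.12, 0.09, 0.05, 0.01];
const total = coeffs.reduce((sum, c) => sum + c, 0);
console.log(total.toFixed(2)); // "1.00"
```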

SCons

I used DSSS last time. That project hasn't been updated for years and still compiles using D1 (discontinued by end of 2012)... so let's try something else. As I use SCons for my C++ projects, I thought it would be great to use the same system. Luckily, it has built-in support for D!

Building a D program is the same as for a C++ program:
env.Program('bin/program', Glob('src/*.d'))
Really easy. The compilation flags for DMD are however quite different from GCC's, so pay attention.

Another thing I had never done is to specify targets, that is, being able to run 'scons release' or 'scons debug'. This is well handled with Alias; the trick is to retrieve the return value of Program or any other builder. I've separated each target into its own SConscript, partly because of the VariantDir. To pass a value out of a SConscript, use Return. Besides, 'scons .' builds everything, so 'all' is a simple alias for '.'.
debug = SConscript('SConscript.debug', variant_dir='build/debug', duplicate=0)
release = SConscript('SConscript.release', variant_dir='build/release', duplicate=0)

Alias('all', '.')
Alias('debug', debug)
Alias('release', release)

That's all for this project!

Tuesday, January 31, 2012

Veggie Might

As I announced last week, I participated once again in the Global Game Jam. Sadly, our game was not ready at the end of the 48 hours, but there are still some lessons to draw. Moreover, we are planning to continue its development, so version 2 should be presentable.

The game is entitled Veggie Might. It is an MMO game where the armies of Baron Broccoli and Count Carrots fight for... well, nothing at this time, but we've thought about bases to take over in a Domination mode. Anyway, we wanted to represent a huge, endless battle a la Lord of the Rings, and this is the link with this year's theme, Ouroboros.

We used HTML5 and WebSockets. The idea is to let players enter the battle as soon as possible. As such, we didn't want them to download and install the game; it would have been an extreme barrier to the player count. With HTML5, they just have to visit a web page and they're ready to go. Moreover, the other programmer and I were pretty new to this world of real-time online games, so we were willing to cross the gap.

The server was programmed in Java. I would rather have used Node.js to iterate faster, but I was anxious about whether it would support the overhead of 1024 players. Maybe it wasn't the proper question at the time, but anyway, it worked. The server is basically in charge of relaying one player's messages to the others (we didn't bother much with security).

Server and clients communicate via WebSockets; on the server we used TooTallNate's WebSocket library. It works as expected, that is, the user sends and receives messages like with any other network library. Our messages were JSON-encoded; there is no need to design a binary protocol in a prototyping phase.
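JSON encoding keeps prototyping simple; a sketch (the message shape here is made up, not our actual protocol):

```javascript
// Encode and decode game messages as JSON strings, ready to be
// passed to WebSocket.send() on one side and parsed on the other.
function encode(type, payload) {
  return JSON.stringify({ type, payload });
}

function decode(raw) {
  return JSON.parse(raw);
}

const wire = encode("move", { player: 42, x: 10, y: 20 });
const msg = decode(wire); // back to a plain object
```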

The client uses LimeJS, a framework handling graphical scenes, events and sounds. The scene is a tree of nodes, with Sprites being a particular node type. This translates well into HTML, where the scene is a tree of DIVs, and is very similar to Flash programming.

LimeJS is actually a library to use with Closure. Basically, Closure is a JavaScript-to-JavaScript compiler, which reminds me of my work on libfirm (and that I also have to continue that work...). It adds a system for including other files, which JavaScript sorely misses.

I wanted to do the client side to see the problems related to prediction and collisions. I had already worked on a project dealing with these problems, but I wasn't in charge of that part. I guess it's useless to tell right now how I did it, as it's not working properly. One thing to note, however: I worked with a mockup server, actually a fake WebSocket interface which simulates a server. This was very practical, since I didn't have to wait for the server features to be ready before writing my own code. Therefore I'll always recommend working with mockups on network-related projects.
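A mockup WebSocket can be as small as this (a sketch; the real mock surely handled more cases):

```javascript
// An object exposing the WebSocket surface the client uses
// (send / onmessage), but answering locally instead of over the
// network. The handler simulates the server's logic.
class MockSocket {
  constructor(handler) {
    this.handler = handler;
    this.onmessage = null; // the client sets this, as with a WebSocket
  }
  send(raw) {
    const reply = this.handler(JSON.parse(raw));
    if (this.onmessage) this.onmessage({ data: JSON.stringify(reply) });
  }
}

// A "server" that echoes a move back, as a relay server would.
const socket = new MockSocket(msg => ({ echo: msg.x }));
let received = null;
socket.onmessage = e => { received = JSON.parse(e.data); };
socket.send(JSON.stringify({ x: 7 }));
// received is now { echo: 7 }
```

Swapping the mock for a real WebSocket later only means changing how the socket object is created; the client code stays identical.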

The artist worked with an incredible interactive pen display. He drew in Flash, which allows animations to be previewed quickly. The exported SWF sequence was transformed into a PNG sprite sheet with SWFSheet. The anchor point was buggy and I had to handle it outside of the animation controller.

I recommend two games from my jam site: Quadd for the hardcore gamers, and Backdraft for the softer ones. Both of them were made with Unity3D. A programmer on Backdraft told me that Unity3D was hard to use for a pure 2D game. A special ovation for the group from Toulouse, which spent the whole event doing paper prototyping.

A last word about the event: as a creator (programmer or artist), you really should attend this kind of event. You'll learn a lot. And don't try to stay awake, particularly at the beginning. By sleeping, your brain will continue to work on your ideas, and this is much more efficient.

As I said, we are willing to continue the development, so stay tuned!

Thursday, January 26, 2012

Bunt

I had to participate in the current Experimental Gameplay Project competition: some games will be shown at the Stattbad Gallery in Berlin! Bunt is my prototype in response to the theme Five Buttons.

Windows users: play this game with no installation! (quick play)
Mac and Linux users: huh, you still need to install Löve2D and play this version.

Bunt means multicolored in German. Indeed, the game is all about interacting with colored objects. In addition, bunt is four letters long, and the game uses four primary colors. Each key from 1 to 4 is assigned a color. By pressing a key, the player interacts with objects of the corresponding color. The goal is to touch every star in the level to go to the next one.

What are those interactions? Something simple, easy to understand, fun to use, and ultimately allowing crazy puzzles to be solved. It was important that each object has a different interaction; otherwise it would be like "hey look, something new! But actually you already know it." During playtests of a previous, unreleased prototype, we saw that players enjoy discovering new gameplay elements. This is an extreme illustration of that concept.

I started the competition with the intention of developing a puzzle game where rhythm helps to solve the levels. The base would be little 8-bit-style pixels depicting sound frequencies. Keys would allow interacting with these pixels, moving them in different directions. But then I thought it would be very limited. So, let's add more things to interact with. Hey, it's cool! Let's go further: all the buttons do the same interaction. And eventually I dropped the 8-bit stuff for a colored design. So, quite different from my first thoughts.

Press blue or yellow and the crate will fall.

Levels present a simple situation. The player should immediately understand what to do to reach the stars. For instance, the second level contained just the frog at the beginning. Two friends tested the game and were stuck there. At the second level?! Damn, wasn't it obvious that a frog can jump in rhythm? No, it wasn't. So I added a little trampoline, and it became evident that the frog could jump with impetus (it was also more logical: a frog can't gain impetus just by itself). Levels are also designed to be short. Well, the last ones may require some experience, but the first ones are very short, and this creates a little addiction to the game in the same way that WarioWare does.

Level design actually starts before the first level. The first thing the player sees is the menu. "Play the game with these keys", each key has a color, there are colored balloons... let's press 2. Oh, the balloon blows up! This is the very first understanding of the gameplay: pressing a key blows up the balloons of the corresponding color. The first level confirms this idea and introduces the star (it may not be clear what the star actually does, but the next levels will make sure that they have to be touched by something). The second level has a frog instead, and introduces the notion of pressing keys several times at precise moments. The third one uses another color, showing that the principles are the same for all keys, and requires the player to press a key for a finite duration. And so on... almost each level adds some element to the game.

The player should quickly understand that he can restart the level when he can't reach the stars anymore, and that this is no punishment. At first, I made an in-game menu, appearing when the player presses 5. Then, the key 2 allowed restarting the level. A friend told me it took too long, and besides, the menu was useless. True. So let's just press 5 to restart, and press longer to quit.

Colors were chosen mainly to change between levels. In addition, there is a physical constraint: the game is meant to be played with five big floor buttons, and players can't press both 1 and 4 at the same time. Therefore keys are often used in pairs. Moreover, the most difficult levels use keys on the right, because the restart key is closer.

Once again, I developed this prototype with Löve2D. As this game heavily relies on physics, I used Box2D (included in Löve2D). I've understood why Magnetic glues sometimes bugs out (I won't fix it anyway). The reason is well documented: "Using Shape:destroy when there is an active remove callback can lead to a crash. It is possible to work around this issue by only destroying object that are not in active contact with anything." but I hadn't clearly understood what it meant. Simply do not delete any shape which is in contact. The solution I chose is to remove entities by moving them outside the screen and to save them in a separate list until nothing is in contact with them (therefore I increment a variable in the add callback and decrement it in the remove callback).
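The workaround boils down to deferred removal; here is the pattern sketched in JavaScript, outside of Box2D (the names are mine):

```javascript
// Count active contacts per entity; only really destroy an entity
// once its contact count drops back to zero.
const pendingRemoval = new Set();

function beginContact(entity) { entity.contacts++; }

function endContact(entity) {
  entity.contacts--;
  if (pendingRemoval.has(entity) && entity.contacts === 0) {
    pendingRemoval.delete(entity);
    entity.destroyed = true; // safe: no active contact anymore
  }
}

function remove(entity) {
  if (entity.contacts > 0) {
    pendingRemoval.add(entity); // move it off-screen, destroy later
  } else {
    entity.destroyed = true;
  }
}

const crate = { contacts: 0, destroyed: false };
beginContact(crate);
remove(crate);     // still in contact: only marked for removal
endContact(crate); // contact ends: now actually destroyed
```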

Entities are just containers for a set of components. Almost all objects have a body component (physics) and a sprite component; furthermore, the interactive objects have a logic component which basically allows custom scripts related to interaction. But... I feel like something is missing. On Magnetic glues I separated the concepts of behavior and game object, but I added a huge overhead with a signal-slot system... Needless to say, I've sometimes hacked the game engine (i.e. broken the encapsulation) to keep things simpler. The last level with the puzzle is a hack of both the game engine and the game's principles, but I thought I needed to break the game before the end, to calm down the player.

The graphics were made with Inkscape. The background was inspired by Angry Birds. The green, open landscape adds nothing to the level itself, yet it subconsciously relieves the player. For me too, because finding and designing objects for mostly two uses is exhausting... The little GFX and SFX cues were very easy to add, yet they make the game more attractive. Although I'm still not proud of the sound design, and I'll work on that in future prototypes.

As a last anecdote, there are two alternative control schemes. The first is to use keys F1 to F5, because I wanted to play the game like a guitarist in Frets on Fire. It also seems that Löve doesn't support the keys 1 to 5 very well under Linux, but I currently can't verify this. The second input method is an Xbox controller, as it uses the same colors for its buttons. Any other button restarts the level. By the way, the order of colors (blue, green, yellow, red) was chosen because of the button layout on the controller and the way Dance Dance Revolution maps it.

Don't forget: Global Game Jam next week-end!

Friday, December 23, 2011

Revenge of the Christmas tree

Together with some people from the GameLab Karlsruhe, we made the game Revenge of the Christmas tree for the exhibition Oh Tannenbaum!, which takes place in the Karlsruhe University of Art and Design. The development was especially interesting since we made it within five days, with a dozen people and very relaxed project management.

Download the game for Windows or Mac.

The game is a beat 'em up where players control Christmas trees fighting against students. At the end, the player characters face the Übertree, a huge, nasty Christmas tree who holds the university's directors captive. Well, don't take the story too seriously! Nonetheless, we won a special award for merits in ecological sustainability and deceleration, but I guess it's a joke from the jury ;)

The game was created with OpenBOR. It's a game engine specialized in beat 'em ups, with ports for Windows, Mac... and several consoles such as the Dreamcast, Wii and PSP! In addition, it does not require coding skills (though coding C-style scripts improves the engine's default behaviors), so it may be a good start if you want to make games on such platforms. Although I have to say that you'll need good pixel artists...

Programming a level is quite easy: basically, it consists of defining the position of each entity (enemies, obstacles, and so on) in plain text files. The engine already handles the AI for the enemies, but the Übertree had predefined attack patterns and required a script to completely bypass the movements given by the engine. We coded a simple finite state machine allowing it to move, switch animations or drop bombs.
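Such a finite state machine fits in a few lines; a sketch in JavaScript (the states and durations here are invented, not the actual ones):

```javascript
// A timer-driven FSM: stay in each state for its duration, then
// switch to the next state (where the script would change the
// animation or drop bombs).
const boss = { state: "idle", timer: 0 };

const transitions = {
  idle: { next: "move", duration: 2 },
  move: { next: "bomb", duration: 3 },
  bomb: { next: "idle", duration: 1 },
};

function update(entity, dt) {
  entity.timer += dt;
  const t = transitions[entity.state];
  if (entity.timer >= t.duration) {
    entity.state = t.next; // switch animation / drop bombs here
    entity.timer = 0;
  }
}

update(boss, 2.5); // idle -> move
```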

The graphics are GIF files. Images for an entity must share the same color palette to save ROM space and processing time (the engine focuses a lot on performance because it targets limited platforms). As always, an entity is defined in a text file which basically contains a list of available animations (themselves defined as lists of images). An offset can be declared with each image, so that they stay consistent with the logical position of their entity. In the same way, images can have bounding boxes, attack boxes... While a simple image editor may suffice for these images, cut-scenes are animated GIFs, and an animation editor would be useful.

We really lacked communication. It began as a classic waterfall project (I was not the manager!) where artists deliver assets to be used by coders to build the game (by the way, the coders designed the levels). The first graphics we received were PNGs, and as I said, we needed GIFs. Shadows were drawn on the images, but the engine already adds its own! Even after all the artists had switched to GIF, the palettes were not coherent, and I had to handle this problem until the end. Well, it would have been better if the artists had asked the coders for details about what they had to do beforehand (agile!). Fortunately, after sorting out these issues, the project ran better and we managed to release the game for the exhibition.

By the way, Jen and Jan have made their own game, Schmück-Shooter, playable online. It's a third-person Christmas tree shooter, where the player has to decorate Christmas trees with a gun throwing Christmas balls and gifts. Nasty trees appear randomly and quickly move toward the player character. Sadly, nothing highlights these appearances, and the players were very confused. It was developed with Unity3D, allowing them to make it quickly... and Playmaker, to develop even faster. Playmaker is a visual state machine editor, allowing logic to be driven with blocks and arrows instead of lines of code. I haven't tried it yet, but visual scripting is so much more convenient for game designers... It is available in the Unity Asset Store.

Sunday, November 20, 2011

Kaleidoscope

I'm proud to release my own programming language, Kaleidoscope! With this powerful language, you can define functions, which can call other functions, add or multiply values, and... well, that's it. But look at this fabulous example:
# imports
extern input()
extern print(v)

# magic function
def transform(a, b)
    a * a + 2 * a * b + b * b

print(transform(input(), input()))
Just kidding. This project is only a tutorial, and aims to use libfirm, "a library that provides an intermediate representation and optimisations for compilers. Programs are represented in a graph based SSA form." Since I'm writing my master's thesis on this theme, I wanted to work through a tutorial to understand the core concepts.

Moreover, I used Bison and Flex for the first time, and they are pretty fun to program with!

Note: Kaleidoscope comes from a tutorial written by the authors of libfirm, which is sadly not up-to-date with the latest versions of the library.

Parsing the code

The code first needs to be cut into semantic tokens: that's the role of Flex. It is a lexical analyzer generator: you define token rules, and it creates a C function which can match these rules.

For instance, given the rules
"+"                     { return '+'; }
[0-9.]+                 { return DOUBLE_LITERAL; }
[a-zA-Z][a-zA-Z0-9]*    { return IDENTIFIER; }
Flex will produce a function which transforms the following code
foobar + 42
into this series of tokens:
IDENTIFIER '+' DOUBLE_LITERAL
Quite easy. Then comes Bison, a parser generator. Once again, you define rules translating tokens into C code, and the generated parser does the job. Here is an extract of the rules I use:
expr: DOUBLE_LITERAL
        {
            // constant value
            num_expr_t *expr = (num_expr_t *)malloc(sizeof(num_expr_t));
            expr->next = NULL;
            expr->which = EXPR_NUM;
            expr->val = $1;
            $$ = (expr_t *)expr;
        }
    | IDENTIFIER
        {
            // variable
            id_expr_t *expr = (id_expr_t *)malloc(sizeof(id_expr_t));
            expr->next = NULL;
            expr->which = EXPR_ID;
            expr->name = $1;
            $$ = (expr_t *)expr;
        }
    | expr '+' expr
        {
            // addition
            bin_expr_t *expr = (bin_expr_t *)malloc(sizeof(bin_expr_t));
            expr->next = NULL;
            expr->which = EXPR_BIN;
            expr->op = '+';
            expr->lhs = $1;
            expr->rhs = $3;
            $$ = (expr_t *)expr;
        }
    ;
which parses an expression.

As you can see, I do not generate the final graph here; instead I construct an abstract syntax tree (AST). I chose this solution because when an expression is parsed, its context is still unknown, especially the function the expression belongs to. The AST would be something like:
{
    "which" : EXPR_BIN,
    "op" : "+",
    "lhs" : {
        "which" : EXPR_ID,
        "name" : "foobar"
    },
    "rhs" : {
        "which" : EXPR_NUM,
        "val" : 42
    }
}
when represented in JSON :)

Constructing the graph

Before parsing the code to compile, I initialize the libfirm library and create a program representation inside the lib. Parsing then completes this program with nodes, which describe the program in a logical way.

When the tokens EXTERN, DEF or MAIN are parsed, I create a new function prototype. MAIN has a fixed prototype (no parameters, returns an int), whereas EXTERN and DEF have custom prototypes depending on the function they declare (the parameters and return value are doubles).

Moreover, DEF and MAIN define a function body, which is an expression: actually, the previously constructed AST. So the tree is traversed to construct nodes. Here is the representation of the function graph for the first example's "transform" function:

transform: (a, b) -> a*a + 2*a*b + b*b

A function basically has three blocks: a start block (prototype, etc.), a body block, and an end block. In the start block we see that the function takes two parameters (nodes 75 and 76) and returns a single value (node 72). Green nodes are just links between blocks; unfortunately you'll have to guess these links.

Note that the graph has already been optimized automatically: "2 * a" has been rewritten as "a + a" (node 84), which may be faster.

Generating the assembly file

Once parsing is done, the program is translated into another language (libfirm cannot directly generate an executable file; it relies on assemblers for that complex task). Here is the x86 assembly for "transform":
# -- Begin  transform
    .p2align 4,,15
.globl transform
    .type transform, @function
    .p2align 4,,7
    /* .L66: preds: none, freq: 1.000000 */
    pushl %ebp                       /* ia32_Push T[217:33]  */
    movl %esp, %ebp                  /* be_Copy Iu[220:36]  */
    fldl 8(%ebp)                     /* ia32_fld T[189:11]  */
    fld %st                          /* ia32_fpush ANY[229:45]  */
    fmul %st(1), %st                 /* ia32_fmul E[192:14]  */
    fxch %st(1)                      /* ia32_fxch ANY[230:46]  */
    fadd %st, %st                    /* ia32_fadd E[193:15]  */
    fldl 16(%ebp)                    /* ia32_fld T[194:16]  */
    fxch %st(1)                      /* ia32_fxch ANY[231:47]  */
    fmul %st(1), %st                 /* ia32_fmul E[196:18]  */
    faddp %st, %st(2)                /* ia32_faddp E[197:19]  */
    fmul %st, %st                    /* ia32_fmul E[198:20]  */
    faddp %st, %st(1)                /* ia32_faddp E[199:21]  */
    movl %ebp, %esp                  /* ia32_CopyEbpEsp Iu[224:40]  */
    popl %ebp                        /* ia32_PopEbp T[225:41]  */
    ret                              /* be_Return X[171:22]  */
    .size transform, .-transform
# -- End  transform
That's it. Have fun with Kaleidoscope!

Tuesday, October 25, 2011


Some words about the prototype we developed at Devmania 2011 on the theme of pirates. I don't provide links to download or play it because I don't feel I own it; read further.

Edit: all the games are available on the Devmania's website!

We were a team of four. Our game was simply named Pirates (like many others). It was a vertically scrolling game where the player controls a ship and has to connect islands, which appear randomly at the top of the screen, in order to build a peer-to-peer network between them. Enemy ships try to cut this cable, but fortunately you can shoot them with your cannons before that happens.

The main drawback of the meeting was the lack of a network connection. I couldn't have imagined that a gathering of computer fans would provide no internet access. Well, now I can, so from now on I will always put an Ethernet cable in my bag and bring a switch to meetings. You should do the same!

That meant we had to work with the only tools we had on our laptops. Because I had developed Magnetic Glues on mine, I still had the installation file for the Löve2D framework, so we decided to use it. Moreover, its documentation is only available online, so the Magnetic Glues prototype provided a valuable starting point despite its complexity, which we dramatically reduced.

Because we could not easily share our code (no version control), I thought it would be better to reduce the number of programmers and to produce some artwork instead. I am no artist and can't produce great drawings, but the others found them fun, so it wasn't a big waste of time. I mainly used Inkscape.

Unfortunately I can't explain what they did in the code because I never took the time to look at it. I can just say that a common mistake was repeated: at the end, they rushed to add the instruction screen and package the files. Anyway, I was not very productive on the project and I don't feel I'm really part of it, which is why I didn't upload it. Maybe a group member will.

Presentation of our game.

Tuesday, September 6, 2011

Embedded terminal in websites

Today I'm releasing jquery.terminal, a jQuery plugin that embeds a terminal in web pages. It ships with several base commands and can be customized with application-specific commands. It has a powerful operator mechanism to group commands together (allowing commands to be piped, among other combinations). And it is very, very geeky.

Try the demo online!

I first coded this plugin two years ago, when Aurélien asked me for a terminal to administrate a game he had made, without having to build a web interface. I thought it was a good idea and built a system that used JavaScript and PHP to execute commands on the server side. Last weekend I found this code again, and since I dropped PHP in favor of "all in JavaScript!" months ago, I wanted to rewrite it. Here is the result!

I really love the command flow: the output of one command serves as additional arguments for the next. Real terminals have two input methods, arguments on the command line or the standard input. Here I made life much easier with a single input method. Actually, it is designed as a data stream with filters that can map values to other ones.

It is the first time I have built an execution tree. Because the syntax is really simple, it was not hard to parse and collect information. The parser mainly relies on state machines with very few states. Another solution would have been a BNF grammar and a JavaScript parser generator; I don't know whether one exists, and anyway it seemed too heavy. However, the final code is harder to read, and thus to maintain, than if it used BNF.

Follow the development on GitHub.