Windows 8 is facing a lot of headwind at the moment, and the forecast doesn’t look like it will improve in the near future either, with prominent game developers and publishers taking to the barricades to accuse Microsoft of building a closed ecosystem. I find myself siding with this opinion, as I too see services like Steam playing an important role in the gaming world, which just happens to be an industry that cannot be sidelined.

What Microsoft is attempting to do is merge the PC and mobile markets. Starting now with Metro apps, the Windows Marketplace is to become the only place where you can purchase Windows applications. This is what Apple, Google and Microsoft have been doing with mobile devices for some time now, and it’s what we have all come to expect there. The PC market is different, however. It has always been open, which has resulted in a diverse market and allowed third parties to build successful distribution and marketing networks.

You could argue that Microsoft is just doing what Steam has been doing for a long time now, but the difference is that Microsoft would own both the marketplace AND the operating system, which would eventually give it dominance over the whole Windows application distribution network. Currently there is no real alternative to Windows in the PC gaming world, but I would expect to see Mac OS and Linux grow more popular if Microsoft does not heed the signals coming from the gaming industry and once again open up the markets on the PC.

Just recently I read a great blog post by David Darling, the founder of Codemasters, in which he talks about how traditional retail games are declining thanks to the increasing popularity of digital distribution.

I personally think of retail games as being relics of the past. It does not really make much sense to still keep distributing boxed games when the same game can be elegantly downloaded and updated over the air through a digital distribution channel.

The world is not all rainbows, however. One big issue with mixing digital distribution and boxed retail games is that resellers will not tolerate you selling your game for 10€ digitally while they’re selling the same game for 70€. The only way around this issue is to move to full digital distribution. This has the added benefit of reducing piracy, as the game can be tightly bound to the service you downloaded it from.

Many players are, however, rightfully complaining about not being able to play the games offline. Having games tightly bound to the internet is a problem when games are bought from a retailer as we tend to expect that once we have the product we can use it anywhere because we physically own it. The truth is that we don’t actually own the product. Instead, the typical EULA actually states that we only have a license to use the product. We’re not, for instance, allowed to disassemble the product, which the owner is indeed permitted to do.

Digital distribution allows us to provide games as services, instead of selling them as standalone products. This means that for a service to work you have to be connected to the internet but you still have the same rights to use the product. It’s really straightforward; if you downloaded a client from the internet you are expected to have an internet connection so you’re able to connect to the server.

A game distributed digitally that is built using a client-server architecture has the added benefit of allowing you to play anywhere as long as you have the client installed and you are able to log in with your user information. Your save games can be backed up and your game can continue anywhere.

Another development we’re seeing in the gaming industry is the increasing popularity of free-to-play games. These are games that let you play for free but allow you to boost your gaming experience with real world money.

The nature of these games is that players are constantly rewarded with new content, the game can evolve according to how they play, and their wishes can be incorporated into the product. Free-to-play games can quickly gain a large player base, and monetization comes from offering players valuable things to buy that make their gaming experience more fun.

I am personally very excited about free-to-play games, as it’s possible to build the game together with your players; there is no need to work on a game for 5 years from start to finish and only then see whether it’s actually something the players like. This is a typical problem with big movie-like retail games, and the recent news about Radical Entertainment practically closing its doors paints a clear picture of what can happen when the risk does not pay off.

I’m extremely happy to announce that Raccoon Interactive has finally released its first game on the Android platform. The game can be found on Google Play. If you are on your Android device, you can get the game by clicking this link.

There’s also a free Lite version of the game if you’d like to give it a try before buying! If you’re on your Android device, you can get the Lite version by clicking this link.

The game is all about a bubble named Frank in search of his cousins. Frank, being a bubble, likes to move upward which is why your task as the player is to rotate the world around Frank instead of controlling Frank himself.




Bubbling Up is developed using Raccoon Interactive’s in-house technology and runs on Android 2.3 and newer devices. If you are having issues with the game, don’t hesitate to contact us through the e-mail address available on Google Play!



With the Windows 8 Consumer Preview version out in the open and the Visual Studio 11 Beta also available, I’ve been putting some more effort into trying them both out. The first impression of the new Visual Studio 11 build was: Why doesn’t it work at all on Windows 8?!

The issues I encountered were odd to say the least. Visual Studio did install correctly, which was a good sign, but when I started it, I was greeted with a multitude of popups each telling me that some package could not be loaded. So I clicked away for quite some time to finally be greeted by the new default visual style of Visual Studio.

The next thing I wanted to do was start a new project, so I pressed “New Project” and got a “Microsoft.VisualStudio.Dialogs.DialogInitializationException”. After some pondering, and even submitting a bug report, I found that the bug had already been reported. It turns out that the dialog initialization failure has something to do with the packages not being loaded, and following the presented workaround helped.

The trick is to change the negative sign symbol to “-” (U+002D, the ordinary hyphen-minus) in the additional settings of your regional settings. This issue appears to have a wider reach than just Visual Studio, as I heard at TechDays Finland 2012 that the same fix has to be made when using Hyper-V.

Command passing is a handy way to execute pieces of code on specific threads. The idea is simple enough: Write a command into a buffer and that command then gets executed on some specific thread that monitors that command buffer. This paradigm is used in graphics APIs like DirectX and OpenGL, for instance. This allows you to split your data into working sets that are owned by a single thread and must maintain their state over a specified time period.

In the graphics engine I’m developing at Raccoon Interactive we’re using a couple of separate data sets that should change only at specific synchronization points but we don’t want the other threads to stall until that point is reached.

In our case the high-level graphics rendering thread has its own data set that controls the graphical representation of the scene, i.e. the scene graph. The scene graph consists of a node tree with leaves referencing scene primitives like meshes and lights. Meshes use low-level renderer primitives like vertex buffers, materials and textures that are bound to a low-level rendering API, such as OpenGL, through an abstraction layer. Operations against this low-level API are executed on a dedicated thread.

Each level abstracts thread communication with the help of wrapper classes, so from the developer’s point of view you’re just calling methods on objects. This makes writing algorithms that operate on the scene graph or the graphics primitives much easier, because you don’t need to lock your resources: they are modified only at pre-specified points in time. This also means that different sub-systems can continue to run in parallel even though, from the user’s point of view, everything seems to happen synchronously.

The command buffer is at the heart of all this communication. I’ve designed a custom implementation that simply consists of a block of memory into which the function to call and its parameters are stored. The reader of the command buffer then reads this block of memory and calls the associated function with the parameters found in the memory block reserved for the command.

The command buffer implementation in our engine is very C-like, I must admit, but it serves our purpose more than well. To write a command into the buffer you do the following.

RRCommand<sizeof( RRInt_t )> cmd( &CmdMyCommand, pCommandBuffer );
cmd.Set<RRInt_t>( 1 );

The RRCommand class abstracts mapping and unmapping of the command buffer as well as writing the command arguments into the buffer. Its constructor takes a pointer to the command function and the command buffer into which the command should be written. We also pass the size required by the command arguments as a template argument, so the class knows how much space to allocate for the command. All commands are written into a byte-aligned memory buffer. Because assignment operators cause trouble on systems that expect memory to be properly aligned, we’re forced to copy the arguments into the command buffer with a simple memory copy (aligning the arguments for the worst case would also work, but it would require significantly more memory).

To execute the commands we simply do the following.

pCommandBuffer->ExecuteCommand( true );

Passing true to the method makes it execute all the commands in the buffer. Otherwise, only a single command would be executed by a single call to ExecuteCommand.

The command itself is defined by the function CmdMyCommand, as is shown in the next code snippet.

void CmdMyCommand( RRCommandBuffer::RRInputCommand &rCommand )
{
    RRInt_t arg = rCommand.Get<RRInt_t>();

    // ...
}

As you can see, the command receives a single argument, rCommand, that provides access to the command arguments. The convention is that the command should read out all its arguments with consecutive calls to Get at the start of the function. This avoids issues with reusing the temporary argument store from which the arguments are read.

That’s about it. This simple command sending mechanism has helped me a lot in creating abstraction layers to hide tedious thread communication. The same could most likely be implemented by using lambdas but I haven’t tried it because we’re currently tied to the lowest common denominator that is the Android NDK compiler, not to mention other even more limited compilers that we might have to support in the future.

The typical way to check whether anything has changed in a view when you’re navigating away from a page is to bind a method to the change event of each input and set a flag when the event fires. The flag is then checked when leaving the page, and a notification is shown to the user if it is raised. This is all good, but it does not take into account things like changing the order of inputs or changing values back to their original values. Creating elements dynamically also makes this approach tricky, because you have to register the handlers in every method that adds new elements.

Just recently I had to implement this kind of checking. The implementation makes use of jQuery and has been tested with ASP.NET MVC but I don’t see any reason why it should not work with other platforms as well (at least with some minor changes). So without further ado, here’s the code.

/** The variable contains all the values of the view as found by the previous
    call to storeViewValues. */
var viewValues = [];

/** Set this to true to enable dirty-checking for the current page. This is
    also set when registering change checking for the view. */
var enableDirtyCheck = false;

/** Collects all view values and returns the array. */
function collectViewValues() {
    var newViewValues = [];

    // Store inputs.
    $('input:enabled').each(function () {
        if ($(this).attr('id') != null && $(this).attr('id') != '') {
            // Ignore the ones that have '.' in their IDs. These are not part
            // of our input values, since they are not allowed by MVC when
            // generating inputs.
            var id = $(this).attr('id');
            if (id.indexOf('.') < 0)
                newViewValues.push({
                    selector: 'input',
                    id: $(this).attr('id'),
                    value: $(this).val()
                });
        }
    });

    // Store select values.
    $('select:enabled').each(function () {
        var that = this;
        if ($(this).attr('id') != null && $(this).attr('id') != '') {
            $(this).find('option:selected').each(function () {
                // Ignore the ones that have '.' in their IDs. These are not
                // part of our input values, since they are not allowed by
                // MVC when generating inputs.
                var id = $(that).attr('id');
                if (id.indexOf('.') < 0)
                    newViewValues.push({
                        selector: 'select',
                        id: $(that).attr('id'),
                        value: $(this).val()
                    });
            });
        }
    });

    return newViewValues;
}

/** Stores the current view input and select values to viewValues. */
function storeViewValues() {
    viewValues = collectViewValues();
}

/** Checks for changes in the view. You must call storeViewValues when the
    view is fully loaded before calling this method. */
function checkChanges() {
    var hasChanged = false;
    if (enableDirtyCheck) {
        // Collect current input values.
        var newViewValues = collectViewValues();
        if (newViewValues.length != viewValues.length) {
            hasChanged = true;
        } else {
            // Check values.
            for (var i = 0; i < newViewValues.length; i++) {
                var newViewValue = newViewValues[i];
                var viewValue = viewValues[i];
                if ( != { hasChanged = true; break; }
                if (newViewValue.selector != viewValue.selector) { hasChanged = true; break; }
                if (newViewValue.value != viewValue.value) { hasChanged = true; break; }
            }
        }
    }
    if (hasChanged)
        return "The page contains modifications!";
    return null;
}

/** Call this method at the end of the current view if you wish to enable
    dirty checking for the view. */
function registerDirtyChecking() {
    $(document).ready(function () {
        // Default to checking whether the view is dirty or not.
        enableDirtyCheck = true;
        // Store existing values.
        storeViewValues();
        window.onbeforeunload = function (e) {
            var ret = checkChanges();
            if (ret != null) {
                e.returnValue = ret;
                return ret;
            }
        };
        $('form').submit(function () {
            enableDirtyCheck = false;
        });
    });
}

The implementation takes into account that Internet Explorer shows the message even when you return null from the onbeforeunload handler, whereas returning null works as expected on Chrome, for instance. I also set enableDirtyCheck to false when submitting the form, so the message is not shown when we’re actually submitting the changes.

To use the code, simply include the bit of code in your view and call registerDirtyChecking at the end of the view. This forces the document ready handler to be added as the last one to be called (if other document ready handlers don’t add new document ready handlers themselves, that is).

When working with a large code base, finding the reasons for bizarre bugs can often be like finding a needle in a haystack. Finding out why an object gets corrupted for no apparent reason can be quite daunting, especially when it seems to happen randomly and totally out of context.


Take the following scenario as an example. You have defined a class that contains an array of characters that is 256 characters long. You now implement a method for filling this buffer with a string passed as an argument; the method assumes the buffer holds 256 characters.

At some point you notice that you require another character buffer and you add that after the previous one in the class definition. You now figure that you don’t need the 256 characters that the first member can hold and you shorten that to 128 to conserve space. At this point you should start thinking that you also have to modify the method defined above to safeguard against buffer overflow. It so happens, however, that in this not so perfect world this does not cross your mind.

Buffer overflow is one of the most frequent sources of errors in software and often one of the most difficult to detect, especially when data is read from an outside source. The C run-time provides bounds-checked versions of many mass-copy functions (identified by the _s suffix), but they cannot guard against hard-coded buffer lengths that change at some later point.

Finding the bug

Getting back to the scenario, you’re now wondering why the second string gets modified with data that makes no sense at all. Luckily, Visual Studio provides a tool to help you find just these kinds of errors: data breakpoints.

To add a data breakpoint, you first run your application in debug mode or attach to it in the usual way, then go to Debug, select New Breakpoint and New Data Breakpoint. In the popup that opens, you can type in the memory address and the number of bytes you wish to monitor. You can also use an expression here, but it’s often difficult to come up with an expression for data in an object allocated on the heap when you’re not in the context of a certain stack frame.


There are a couple of things to note about data breakpoints, however. First of all, Visual Studio supports a maximum of four data breakpoints at any given time. Another important thing to notice is that some C run-time functions modify memory in kernel space which does not trigger the data breakpoint. For instance, calling ReadFile on a buffer that is monitored by a data breakpoint will not trigger the breakpoint.

The debugger will now break when the memory at the address you specified gets written to. Often you might immediately spot the issue, but the very least this feature can do is point you in the right direction in your search for the real reason why the memory gets inadvertently modified.


Data breakpoints are a great feature, especially when doing a lot of low level operations where multiple locations modify the same data. With the exception of some special cases, like kernel memory modification, you can use it whenever you need to check when memory at a certain location gets changed on purpose or inadvertently.

Just recently I bumped into a very nasty bug that I had been unfortunate enough to conjure. Alignment of memory has never been my primary concern when working on the PC. As a typical C++ programmer you often don’t have to think about such things. On the PC this is usually “almost never” (when not optimizing, that is) and in a managed environment this truly should become “never”. On ARM, however, “never” becomes “almost never” again.

Having your memory aligned means storing values of different sizes at addresses that are multiples of a certain number. The typical CPU gives you a bonus when your memory is properly aligned but does not punish you when it’s not. ARM, on the other hand, is not that nice.

Issues arise when you write to a memory block at an arbitrary offset by interpreting that location as a value of a given type, as is done in the following example.

void *pMyBuffer = malloc( sizeof( int ) * 2 );
*(int *)( (char *)pMyBuffer + 2 ) = 10;

malloc returns memory aligned to the worst-case boundary, so writing at the beginning of the memory block would be OK. Writing an int at a two-byte offset, however, means you’re writing at a memory address that’s not a multiple of four. Optimized code on ARM does not like this.

The easiest way around this is writing byte by byte when absolutely necessary. When compiling with GCC, it appears that disabling compiler optimizations might also get you around this issue. I think we can all agree that that’s not the best solution to this problem, however.

The problem I recently ran into was related to this issue, but it occurred in a managed environment! I was running Mono on Android and was using a Vector3 structure for which I had explicitly specified the memory alignment by telling the run-time to pack the structure at byte boundaries. The issue arose when I was accessing an instance of the structure embedded in another class.

I was initially puzzled, since I had tested the structure with another class and all had worked fine. After rigorous testing, and fixing the issue by placing the structure at a different location within the class, I finally figured that it had to be a memory alignment issue.

What I find somewhat interesting is that I hadn’t specified the packing of the containing class and thus the run-time should still have been able to align my memory correctly in the containing class while still respecting the layout of my Vector3 instance. This is actually something that led me into believing that there is a bug in Mono. Feel free to comment on this post if anyone has any insight on this.

The lesson that we all should learn from this? Sometimes it helps to know what happens under the hood, even in a managed environment.

The recently ratified C++11 language specification provides a range of cool new features, many of which have been part of other programming languages for some time now. One such new feature that I value a lot is the concept of lambdas.

Lambdas are great in many ways. They enable you to create callbacks that are called for specific items, for instance, or you can implement events with them, as is the case in this blog post. The following is an example of an event implemented in C++11 using lambdas.

RREvent1Arg<int> e;
e += [](int i) { printf( "%d", i ); };

e( 100 );

This looks exactly like C#-style events and makes for a very neat way of creating notifications that the user can request to be triggered at pre-specified points. The construct introduced here was not possible in previous versions of C++, but with the introduction of lambdas we now have the power to define these kinds of constructs in a developer-friendly manner.

A lambda is actually a nameless function body wrapped in a nameless type. This type can also store captured stack variable values if you so choose, which is done by specifying the captured variables between the [ ] brackets. You also have the option to specify default capture behavior: write ‘&’ between the brackets to capture the variables you use by reference, or ‘=’ to capture them by value. You can also prefix the name of a single variable with ‘&’ to capture just it by reference. All the variables you have captured, either explicitly or implicitly, can be used within the function body. The following shows a few examples.

int stackValue = 200;
RREvent1Arg<int> e;
e += [=](int i) { printf( "%d\n", i + stackValue ); };
e += [&](int i) { printf( "%d\n", i + stackValue ); stackValue += 100; };
e += [&stackValue](int i) { printf( "%d\n", i + stackValue ); };
e += [stackValue](int i) { printf( "%d\n", i + stackValue ); };

e( 100 );

// Prints:
// 300
// 300
// 400
// 300

Because lambdas are defined as nameless types, you can’t directly define the type of a variable that will contain a lambda. This is one major reason why C++11 introduces the “auto” keyword. When using the auto keyword, as is done in the following example, the compiler actually deduces the type of the variable without you having to explicitly specify it.

auto l = [](int i) { printf( "%d\n", i ); };

What about templates then? We’d like to define a template called RREvent1Arg that does just what we saw at the beginning of this post, which means we need to be able to store references to the lambdas added to the event. This is where std::function comes in. This template class lets us define a function prototype that the lambda must comply with and then refer to any callable with that prototype. We also store these objects in an array, allowing the user to register multiple event handlers.

Cutting a long story short, the magic behind the event implementation above goes as follows.

template<typename T1>
class RREvent1Arg
{
public:
    typedef std::function<void (T1)> Func;

    void Call( T1 arg )
    {
        for( auto i = m_handlers.begin(); i != m_handlers.end(); i++ )
            (*i)( arg );
    }

    void operator ()( T1 arg )
    {
        Call( arg );
    }

    RREvent1Arg& operator += ( Func f )
    {
        m_handlers.push_back( f );
        return *this;
    }

    RREvent1Arg& operator -= ( Func f )
    {
        for( auto i = m_handlers.begin(); i != m_handlers.end(); i++ )
        {
            if ( (*i).target<void (T1)>() ==<void (T1)>() )
            {
                m_handlers.erase( i );
                break;
            }
        }

        return *this;
    }

private:
    std::vector<Func> m_handlers;
};
Note that you have to define a new class for each parameter count. This is not pretty, but if you’re working with Visual Studio 2010 it’s a must because VS2010 does not support variadic templates. So you’ll have to wait for VS11 to be able to make the code prettier.

Of course, lambdas are not the only way of implementing these kinds of events. We’ve been using this kind of event structure since the dawn of time in our game engine at Raccoon Interactive. We’re actually not able to implement events with lambdas at this point in time, due to limitations in the compilers on some of the major platforms we’re targeting (Android being one example), so instead we simply store object pointers paired with pointers to their methods.

To conclude, lambdas open up a range of new ways in which we can be more productive, bringing C++ closer to languages like C# in that regard. The updated STL actually uses lambdas a lot, allowing you to write much simpler iteration code, for instance. Events are just one example of how C++11 is bringing C++ back to the “mainstream”, not just in the embedded world but for desktop applications as well.

C++ is a very powerful language. Well-written native C++ code can perform much better than managed languages like C# and Java, due to optimizations that managed systems are not able to perform during run-time compilation (if such compilation is done at all, that is). This is great for developers who work on gaming technology, for instance. For people working on game logic, however, performance isn’t necessarily priority number one; productivity and the ability to express oneself without too much head banging matter more.

When implementing game logic most of the code expresses certain operations that need to be performed. Usually this means initiating an operation and waiting. The actual execution of the operation usually involves starting and updating animations, allocating and loading resources as well as doing rendering. These operations should be implemented in optimized code to get the most out of the platform but the control logic can be implemented in a language that facilitates higher productivity.

Because of this, game engine core functionality is usually implemented in C/C++ and a scripting layer is built on top to utilize those facilities. Some major game engines implement their own scripting languages, with Unreal Engine being the most notable example. These days there are a few options worth considering before starting down that path, however; a few examples include Lua, AngelScript and C# on Mono.

Mono is the system that I’m going to discuss in this post. There are many benefits to using Mono. First of all, Mono provides basically all of the benefits of .NET/Java code, the most notable being productivity. Hardly anyone can argue that writing C# code with a good IDE like Visual Studio 2010 is not productive, and game developers can benefit from the same tools that business software developers get to use.

The second major benefit over many other scripting languages is performance. Yes, I did talk about game logic not having to be as optimized as platform operations but it doesn’t hurt if it is. Mono actually implements a JIT compilation process similar to the Microsoft .NET framework. At run-time it compiles the IL code generated by the Visual Studio compiler and emits machine code executable by the processor. This makes it very fast but not necessarily quite as fast as C/C++, mind you.

As a side note, for the doubtful ones, the performance of C++ stems from a couple of major aspects unique to native programming. First of all, you have full control over the memory you’re accessing. Because modern processors rely heavily on preloading blocks of memory into multiple on-chip caches, knowing what memory you’re accessing becomes very important. Also, modern processor architectures exhibit NUMA (non-uniform memory access), which means that having your memory close to where it’s being used (processor/core-wise) gives you an additional boost. There are other factors as well, but this article is not about C++ performance.

Going back to Mono, multi-platform support is also one of its major benefits. Mono is written in such a way that it’s possible to compile it for most popular platforms, and since it’s open source, if some platform is not supported you can always switch into do-it-yourself gear. Perhaps the platforms of most interest to my readers are Windows, Linux, Android, Xbox 360, PlayStation 3, Nintendo Wii and iPhone. If your favorite platform is not listed, it doesn’t mean it’s not supported.


I’d like to explain to you, the reader, how to get started with embedding Mono to gain productivity for logic and still retain the effectiveness of native programming.

Compiling and initializing

Personally, the most interesting scenario for using Mono is embedding it in a C/C++ application. This allows for building an interface on top of a C/C++ application to enable higher productivity while keeping optimized code in the native host. In this blog post I will be using Mono on Windows.

To embed Mono in your C/C++ application you must first generate an import library using the module definition file found here. By the way, I see they have finally added mono_domain_create_appdomain, which we’ll be using later on, to the definition.


The exact command for generating the import library is as follows:

lib /nologo /def:mono.def /out:mono.lib /machine:x86

The command generates a mono.lib file that your application can link against. Once the import library is generated, you call mono_jit_init to initialize the Mono run-time and mono_jit_cleanup to clean it up. mono_jit_init takes as its only parameter the name of an assembly that will be loaded into the created application domain. It’s also possible to call mono_jit_init_version, which allows you to request a specific version of the Mono run-time.

My personal recommendation is not to use this root application domain for basically anything. This is because dumping the main application domain, i.e. ripping down the run-time, and then re-initializing it crashes the application (I’m not sure if this has been fixed yet). A better option is to create a dummy assembly that gets loaded into this dummy application domain, which then only functions as the default application domain that enables Mono to operate correctly.

You can create a new application domain by calling mono_domain_create_appdomain (before calling this function, ensure that the main dummy application domain is active by calling mono_domain_set). This creates a new isolated virtual process execution environment with its own heap and stack, which means the application domain can be dropped at any time, releasing the assemblies loaded into it for modification.

As I already mentioned, an application domain defines an isolated virtual process execution environment that has its own memory space. This means that you can load assemblies (managed DLL files) into an application domain and run code in it, letting it allocate objects as it wishes, and then drop the whole domain, thus freeing all the allocated memory and the references to the loaded assemblies it owns.

Creating a new application domain for running your script code, for instance, is important because you can then unload scripting components without having to restart the application. You might ask: why not just unload the assemblies that are no longer required? The answer is short: because you can't. The .NET framework, and thus Mono too, forbids unloading individual assemblies. This is due to the complicated interdependencies between objects and their types; unloading could leave missing dependencies in the middle of application execution, which is hardly a good thing.

This brings us to what we do at Raccoon Interactive. We use an application domain to host the game script environment that is created to wrap an existing game environment inside our editor. If we make any changes to the built script assemblies we simply stop the game execution within the editor at which point the editor restores the state of the game to what it was before jumping into the game and dumps the application domain. After this we can simply jump into the game again and see the new modified scripts in action.

Loading and running code

To actually run your code in the application domain you have just created, you must first load the assembly into the domain. This is done as follows.

MonoAssembly *pAssembly = mono_domain_assembly_open( pDomain,
    "MyAssembly.dll" );
if ( pAssembly == NULL )
    return NULL;
MonoImage *pImage = mono_assembly_get_image( pAssembly );

The mono_assembly_get_image function gets the assembly image required by many Mono operations, such as reflection (getting type information).

Assemblies can be reflected for the type information they contain. You can also create objects once you know their types and call arbitrary methods to get your game code running. To get this type information you can call a couple of functions, my favorite being mono_class_from_name. This function takes the image of the assembly, the namespace of the class and the name of the actual class as parameters. The following is an example of getting a type.

MonoClass *pClass = mono_class_from_name( pImage,
            "MyNamespace", "MyClass" );

Great! Now we have loaded an assembly and we have a pointer to the class that we’re interested in. Mono also allows creating objects using the type information we have just requested and invoking methods on that object (or invoking static methods, for that matter).

To create an object and invoke a method on it, do the following.

// Get the constructor of the class.
MonoMethod *pConstructorMethod = mono_class_get_method_from_name_flags( pClass,
    ".ctor", 0, 0 );

// Create a new instance of the class.
MonoObject *pObject = mono_object_new( pDomain, pClass );

// Acquire a GC handle if the object will not be rooted in the CLR universe.
guint32 GCHandle = mono_gchandle_new( pObject, false );

// Invoke the constructor.
MonoObject *pException = NULL;
mono_runtime_invoke( pConstructorMethod, pObject, NULL, &pException );

The most important thing to note here is that the object is not actually initialized straight away when we create a new instance of the class. We first have to get the constructor, which has the rather peculiar name .ctor, and manually invoke it. Here we are using a default constructor that does not take any arguments.
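mono_runtime_invoke passes arguments through an array of pointers. As a sketch, invoking a hypothetical method Update(float) with one argument might look like this:

```c
// Look up a method named "Update" that takes one parameter (hypothetical).
MonoMethod *pUpdateMethod = mono_class_get_method_from_name( pClass, "Update", 1 );

// Value-type arguments are passed as pointers to the raw data.
gfloat deltaTime = 0.016f;
void *args[1];
args[0] = &deltaTime;

MonoObject *pException = NULL;
mono_runtime_invoke( pUpdateMethod, pObject, args, &pException );
if ( pException != NULL )
{
    // The managed code threw an exception; handle or log it here.
}
```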

Also, note that we are acquiring a GC handle to the object. If you wish to use the object from native code and not have it rooted anywhere in the CLR universe (the Common Language Run-time universe) you must acquire a GC handle so that the garbage collector will not steal the object from you.
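When the native side is done with the object, release the handle so the collector can reclaim it:

```c
// Releases the GC handle acquired earlier; after this the object may be
// collected as soon as no managed code references it.
mono_gchandle_free( GCHandle );
```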

Once you are finished running your code you should call mono_domain_unload to unload the application domain that houses your code. You can also call this function to release a domain without actually quitting the application.
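A sketch of unloading, assuming the root and script domains from earlier: the domain being unloaded must not be the active one, so switch back to the root domain first.

```c
// Reactivate the root domain; the active domain cannot be unloaded.
mono_domain_set( pRootDomain, FALSE );

// Drop the script domain, freeing its assemblies and allocated objects.
mono_domain_unload( pScriptDomain );
```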

Next, I’ll cover manipulating fields of objects and going deeper into invoking methods and parameter passing.

Manipulating fields and invoking methods

Often, when working with script objects, you are interested in directly modifying an object in the script universe. You might, for instance, have a native pointer in the object that identifies a native resource to which the object is bound, and you wish to initialize it directly. Mono provides a set of functions for doing just this.

When using Mono reflection, you must first obtain a handle to the particular member you want to work with. For instance, to invoke methods you must first get the method information, as I showed in the previous section. To manipulate fields you do the following.

MonoClassField *pField = mono_class_get_field_from_name( pClass,
    "_myField" );

You pass the function the class information and the name of the field. Simple, eh? Setting field values is a bit more involved and requires some knowledge of how data is passed to the Mono run-time. Documentation on these conventions is pretty much non-existent (as is documentation on everything beyond simple initialization).

To set the value of a field of type float you do the following.

gfloat value = 1.0f;
mono_field_set_value( pObject, pField, &value );

mono_field_set_value takes as parameters the object that contains the field, the field that you are setting and a pointer to the data. For value types this is a pointer to the raw, unboxed data. You should always use the type names prefixed with the character g (they're part of GLib). This will assure that the internal representation correctly maps to your data.

To set a field to reference a Mono object, pass the object pointer itself as the value.

MonoObject *pOtherObject = ...
mono_field_set_value( pObject, pField, pOtherObject );

So for value types you pass a pointer to the raw data, while for reference types you pass the MonoObject pointer directly. To set properties, use the mono_property_set_value function on a MonoProperty instance returned by mono_class_get_property_from_name, for instance. Note that unlike mono_field_set_value, it takes its arguments through a pointer array, in the same way as mono_runtime_invoke.
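As a sketch, setting a hypothetical property named Health might look like this (mono_property_set_value shares the pointer-array argument convention of mono_runtime_invoke):

```c
// Look up the property on the class (the name "Health" is hypothetical).
MonoProperty *pProperty = mono_class_get_property_from_name( pClass, "Health" );

// The setter argument is passed through a pointer array.
gfloat health = 100.0f;
void *args[1];
args[0] = &health;

MonoObject *pException = NULL;
mono_property_set_value( pProperty, pObject, args, &pException );
```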


In this article we covered, first of all, why using a managed layer on top of your native code is sometimes quite useful. On the technical side we covered initializing Mono, creating objects, manipulating them and calling methods on them. There are a lot of specifics that I have not covered and will get into in future posts. These include boxing and registering internal call methods that enable managed code to call back into native code.

As always, if there are errors in this article or something does not work, please don’t hesitate to drop a comment and I’ll try to answer your questions to the best of my abilities!

Until next time!