Alois Kraus

Efficient Memory Usage With .NET

How can you put the words "efficient memory usage" and .NET in the same headline? We all know that C++ is much more efficient with regard to memory consumption, and yes, I agree that if you really care about your memory you should think twice about whether .NET is the right choice for you. There are reasons why Windows Vista does not execute a single managed executable during startup. Well, the Event Viewer is managed, which explains why it starts so slowly. First of all you need to know what things cost. The following table shows how much memory is allocated for some common object types:

 

Type                  Size in Bytes (32-bit)
new object()          12
new string('\0')      20
new DummyStruct()     4

What is perhaps surprising is that each managed class object consumes at least 12 bytes of memory. If you want to allocate a huge number of small objects, you may be better off with a struct (a value type).

The program used to get the numbers was this one:

using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;

namespace MemoryAllocation
{
    [StructLayout(LayoutKind.Sequential, Pack = 1)]
    struct DummyStruct
    {
        public int a;
        public int b;
    }

    class Program
    {
        static List<T> Allocate<T>(int allocations, Func<T> allocator)
        {
            List<T> memory = new List<T>();
            for (int i = 0; i < allocations; i++)
            {
                memory.Add(allocator()); // call function that creates a new object
            }
            return memory;
        }

        static char[] empty = new char[] { '\0' }; // input for the string ctor to create an empty string

        static void Main(string[] args)
        {
            var before = GC.GetTotalMemory(true);

            const int Allocations = 1000 * 1000;

            // allocate memory and do not release it
            var mem = Allocate(Allocations,
                //() => new object()
                //() => new DummyStruct()
                () => new string(empty)
                );

            GC.Collect();
            GC.Collect();
            var after = GC.GetTotalMemory(true);

            GC.KeepAlive(mem);

            // memory allocated per object, excluding the 4 bytes used for the
            // object reference in the list's internal array
            Console.WriteLine("One object consumes about {0} bytes", (after - before) / Allocations);
        }
    }
}

One thing to note: for the reference types (object and string) you need to subtract 4 bytes from the printed value, because the objects are stored in a list and we do not want to count the object reference in the list's internal array as part of the per-object memory consumption.

As I said in my previous post "Where Did My Memory Go", every small (< 85,000 bytes) managed object allocation will eat up your physical memory, because the garbage collector traverses the managed heaps from time to time to remove dead objects and to compact the heaps. The effect is that your objects, even if you never touch them again, always stay hot in memory, which prevents them from being moved out to the page file. You can of course force the OS to swap all your memory out to the page file by calling SetProcessWorkingSetSize(GetCurrentProcess(), -1, -1), but that has severe performance drawbacks: accessing the swapped-out memory later causes hard page faults. Windows Forms applications actually do this to save memory, which is why the Working Set drops to a few MB when you minimize a managed application.

The golden rule is to use efficient data structures so that you consume as little memory as possible. If you want to optimize your memory consumption you need a managed memory profiler. The ones I found most useful are:
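The working-set trick mentioned above can be sketched with a minimal P/Invoke declaration. This is a sketch under the standard kernel32 signatures; the call is only meaningful on Windows, so it is guarded here:

```csharp
using System;
using System.Runtime.InteropServices;

class WorkingSetTrimmer
{
    // Passing -1 for both the minimum and maximum size tells Windows to trim
    // the working set, i.e. to page out as much of the process memory as possible.
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetProcessWorkingSetSize(IntPtr process, IntPtr minSize, IntPtr maxSize);

    [DllImport("kernel32.dll")]
    static extern IntPtr GetCurrentProcess();

    static void Main()
    {
        if (Environment.OSVersion.Platform == PlatformID.Win32NT)
        {
            bool ok = SetProcessWorkingSetSize(GetCurrentProcess(), (IntPtr)(-1), (IntPtr)(-1));
            Console.WriteLine(ok ? "Working set trimmed" : "Call failed");
        }
        else
        {
            Console.WriteLine("Not Windows - skipping");
        }
    }
}
```

Keep in mind that this only trades physical memory now for hard page faults later; it does not reduce the amount of memory your process has allocated.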

 

.NET Memory Profiler from SciTech

  • Affordable: €179 per license (price as of 13.12.2008).
  • The fastest profiler I have used so far.
  • Full 64-bit support.
  • Supports allocation stacks (which call chain led to each object allocation).
  • Can view the contents of allocated objects.
  • Can take snapshots of your process and compare one snapshot to another. This way you can find memory leaks quite easily.
  • Object tagging to find out which objects are new since the last snapshot.
  • Nice and fast filtering capabilities.

Of course there are also some gotchas:

  • The extended profiling mode is not as stable as I would have wished. It works for most applications but can crash on bigger ones.
  • The object list is not very easy to navigate when you want to find your biggest memory consumers.

 

The .NET Memory Profiler is easy to use and definitely worth the money.

 

YourKit Profiler for .NET

  • It can be attached while the application is already running, a feature I have found nowhere else.
  • It is both a performance and a memory profiler.
  • Full 64-bit support.
  • The Class List view is easy to navigate.
  • The Class Tree view is very useful for finding out which objects hold on to all the others, at the namespace level.
  • The memory analysis can find duplicate strings and other wasted-memory anti-patterns which could otherwise only be found by inspecting each string ...

That sounds very impressive, and it is. But it also has some quite severe limitations:

  • No allocation stack support, which makes it very hard to find out who allocated an object.
  • Static class members cannot be tracked down to the class that holds the reference to them; they show up as object roots with no connection to anywhere.
  • Stack-local instances are flagged, but if you e.g. allocate 200 MB inside a function you will not be able to navigate to the class that allocated the objects.
  • The fast profiling mode is much less stable and crashes quite often.
  • Opening a snapshot is very slow.
  • The performance profiler (sampling, tracing) does not show bottlenecks involving thread sleeps the way I would expect, which can lead you in the wrong direction. The new Ants Profiler 4.0 or Intel's VTune performance profiler are much better suited for that job.
  • Not so cheap: €389 per license (price as of 13.12.2008).

YourKit is the leading Java profiler company, and they also have a .NET profiler in their portfolio. No profiler is perfect; in fact the two complement each other, and I would recommend using both (SciTech and YourKit) to get the best possible overview of how the memory in your application is distributed. All of these profilers can be downloaded from the vendors with a free 14-day trial license, and I recommend doing so to find out which profiler suits your needs best. There are other profilers out there as well, such as AQTime and the Ants Memory Profiler. AQTime seems to be able to profile both .NET and C++ for performance and memory, which makes it very interesting, but so far I have not found enough time to check it out because it is not easy to use. The current Ants Memory Profiler is not usable and I cannot recommend it at all, but they have a very good performance profiler which is really worth the money. The only thing I really do not understand about the Ants profilers is that it is NOT possible to launch an application with command line arguments from a batch file. That is fine for GUI applications, but if your application under test spawns child processes you need to be able to call the profiler from the command line.

Equipped with a profiler we can chase our memory now.

I recommend looking first for:

  • Duplicate strings - the objects that allocated them will most likely have several instances around as well. Consider making such strings static to save memory.
  • *Cache* in the type name. It is surprising how much memory is lost in caches. Bigger applications tend to have their own cache in each architectural layer, which should be questioned. If the cache itself is OK, look at how many instances of it exist. A sane rule is that a cache should exist only once; if you find more than one cache instance, it is very likely that the cached data is the same but is not shared.
  • XmlDocuments keep their own XML DOM tree representation, which consumes quite a lot of memory (3-5 times the size of the plain XML file). The profilers have a hard time resolving from an XmlNode to the object that actually holds the reference to it, so this can be a bit tricky to track down.
  • XmlReaderSettings is a fine source of memory leaks if you store it as a member inside your class. When you choose to validate your XML document, the reader settings instance, together with its validation error callbacks, gets attached to the XmlDocument you just read. In effect your XmlReaderSettings instance keeps the whole XML DOM tree alive even when you never need it again!
  • Huge numbers of objects of the same type - check who allocated them.
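For the duplicate-string case, string interning is one way to share identical strings. A minimal sketch (the BuildLabel helper is made up for illustration):

```csharp
using System;

class InternDemo
{
    // Hypothetical helper that builds the same label text over and over,
    // e.g. once per row of a data grid.
    static string BuildLabel(int row)
    {
        // runtime concatenation creates a fresh string instance on every call
        return "Status: " + (row % 2 == 0 ? "OK" : "OK");
    }

    static void Main()
    {
        string a = BuildLabel(0);
        string b = BuildLabel(2);

        // Two distinct string instances with equal content waste memory.
        Console.WriteLine(ReferenceEquals(a, b)); // False

        // Interning maps equal strings onto one shared instance.
        Console.WriteLine(ReferenceEquals(string.Intern(a), string.Intern(b))); // True
    }
}
```

Interned strings live for the lifetime of the process, so use this only for strings with a small number of distinct values.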

 

When you have found an inefficiency it is time to fix it, and then to measure memory consumption again. Here comes the hard part: it is very difficult to check whether a memory optimization actually saved memory. A simple look at Working Set, Private Bytes or the GC heap size does not work, since the GC heaps are allocated in chunks (16 MB, if I remember correctly). These numbers only tell you the peak memory consumption during the startup of your application; half of your heaps might be empty by now, and you can get the impression that nothing has changed after your patch. The easiest way to check whether an optimization actually worked is to look at the GC.GetTotalMemory value from time to time; the overview pages of the memory profilers are also helpful.
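A before/after measurement along those lines might look like this sketch. The FillCache method stands in for whatever code path you are optimizing, and the numbers are only rough because GC.GetTotalMemory is an approximation:

```csharp
using System;
using System.Collections.Generic;

class MemoryCheck
{
    static List<string> cache;

    // Stand-in for the code path whose memory consumption you want to verify.
    static void FillCache(int count)
    {
        cache = new List<string>(count);
        for (int i = 0; i < count; i++)
        {
            cache.Add(new string('x', 100));
        }
    }

    static void Main()
    {
        // forceFullCollection: true makes GC.GetTotalMemory collect first,
        // so dead objects do not distort the numbers.
        long before = GC.GetTotalMemory(true);

        FillCache(100 * 1000);

        long after = GC.GetTotalMemory(true);
        GC.KeepAlive(cache);

        Console.WriteLine("Cache costs about {0} KB", (after - before) / 1024);
    }
}
```

Comparing this number before and after a patch gives a much more reliable answer than watching the Working Set counter in Task Manager.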

posted on Friday, December 12, 2008 1:52 PM

Feedback

# re: Efficient Memory Usage With .NET 12/13/2008 9:27 AM Robert Mircea
You forgot JetBrains' dotTrace profiler - one of the best.

# re: Efficient Memory Usage With .NET 12/14/2008 7:00 AM Alois Kraus
I have not looked at it yet, but you can bet I will. If somebody knows the best tool to find C++ memory leaks in a Managed C++ application, I would appreciate a pointer.

Yours,
Alois Kraus


# re: Efficient Memory Usage With .NET 12/15/2008 5:31 AM BCoelho2000
Great post about memory in the .NET platform and lots of cool debugging tools.

Speaking about tools: you can also look at SOS http://msdn.microsoft.com/en-us/library/bb190764(VS.80).aspx

Best regards,
BCoelho2000

# re: Efficient Memory Usage With .NET 12/15/2008 5:09 PM Alois Kraus
I use Windbg quite a lot, but for memory issues I still like the commercial tools much better, since it is much (I mean magnitudes) easier to track down your "bad" objects with them than with ClrProfiler. SOS is able to dump the managed heap in a format that ClrProfiler understands, so at least some overview GUI exists.


Yours,
Alois Kraus


# Memory Profiling with ANTS Memory Profiler 5 6/18/2009 6:21 AM Stephen
A totally new version of ANTS Memory Profiler (v5.0) has now been released. It has been completely redesigned and now offers very fast profiling, with a host of new features to quickly identify the cause of memory leaks. It can be downloaded from:

http://www.red-gate.com/products/ants_memory_profiler/index.htm
