Friday, January 3, 2014

C# via Java: Arrays


The one primitive type that hasn't been covered is the array. An array contains a fixed number of items, and each item is a value of the array's element type. The array elements are individually indexed starting from zero.

In comparison to the other primitive types, which are all value types, arrays are reference types. That means that variables of array types don't contain the array directly where the variable is defined, but instead contain a reference to the array. In code, array types are specified using square brackets after the type of the array elements. For example, an array of double values is a double[], an array of ushorts is a ushort[], and an array of arrays of integers (each element of the top-level array is a reference to another array) is an int[][].

Array elements

As arrays are primitives, the runtime needs to provide built-in operations to create arrays of a fixed size, read and write individual array elements, and get the length of an array. The Java and .NET runtimes provide specific instructions to read and write elements of each primitive type:

Operation                                                         Java         .NET
Create a new array.                                               newarray     newarr
Read integer value from int[].                                    iaload       ldelem.i4
Read integer value from ushort[].                                 N/A          ldelem.u2
Store double value in double[].                                   dastore      stelem.r8
Read reference type value from an array of some reference type.   aaload       ldelem.ref
Get the length of an array.                                       arraylength  ldlen

What does this mean? Well, every array of a primitive value type is a completely separate type, with its own type information created at runtime. Just as an int cannot be used in place of a ushort, an int[] cannot be used in place of a ushort[], as different instructions are needed to access arrays of different element types. You can't use ldelem.u1 to read an element from an Object[].

However, the same instructions are used to read and write elements of arrays containing reference types - aaload (Java) or ldelem.ref (.NET) is used to read an element from a String[], Object[], int[][] (loading an element of type int[]), or any other reference type. As we're dealing just with primitive types in this post, we'll ignore this for now, and consider all arrays of a reference element type to be equivalent.

So, a byte[] is a separate type to a short[], which is a separate type to a <ref type>[]. And this is indeed the case in Java. The type descriptor for each type of array is created when the runtime is loaded, inherits directly from Object, and has special built-in operations and syntax in the Java language that map directly onto bytecode instructions. Java array types do not implement any interfaces, they have no methods, and they are not directly convertible to other types of arrays.

In particular, arrays are completely separate to the collection and list interfaces in Java's class libraries - explicit wrapper methods are needed to convert between the two.

But this is not the case in .NET. In the CLR, all arrays inherit from System.Array (which is declared as abstract). This type defines several methods, and implements IList. So all the array types created by the CLR inherit these methods and interface implementations. This means that an instance of any array can be treated as a collection type, and can be used wherever an IEnumerable, ICollection, or IList is required. If the array element type is a value type, the array performs the necessary operations to box and unbox the elements in the array when the array is accessed using those interfaces.
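
As a minimal illustration of this (the variable names are my own), a plain int[] can be assigned straight to the non-generic interfaces, and reading a value type element back through IList boxes it:

int[] numbers = { 1, 2, 3 };

// uses the IList implementation inherited from System.Array
System.Collections.IList list = numbers;
object first = list[0];   // the int element is boxed to an object here

// arrays are fixed-size, so the mutating members throw
// list.Add(4);           // would throw NotSupportedException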

SZArrayHelper

But it does not stop there. Starting in .NET 2, array types are also provided with their own implementations of the generic IEnumerable<T>, ICollection<T>, and IList<T> types. And you can find this implementation in mscorlib.dll. Open it up in a decompiler, and navigate to System.SZArrayHelper.

If you have a look around this class, you'll notice some strange calls looking like this:

T[] arr = JitHelpers.UnsafeCast<T[]>(this);

This call to UnsafeCast casts the 'this' reference to T[], where T is the method's generic type parameter. But these methods are declared on the type SZArrayHelper, which is not an array.

What's actually going on is that, at runtime, the IL of the methods in SZArrayHelper is grafted onto each array type as it is created by the CLR. These methods are used to implement the generic IList<T> interface, with T instantiated to the element type of the array.

These interface implementations are applied to each array type, allowing the primitive arrays to be used where a generic collection type defined in the class libraries is expected. Similar to the primitive types in the previous post, the CLR provides extra functionality to arrays, on top of the same built-in operations provided by the runtime. This functionality is partly inherited from the System.Array type, and partly patched in to array types at runtime. In Java, the arrays only have the operations provided by built-in instructions.
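
On the .NET side, the effect is easy to see from C# (a small sketch of my own): an array converts implicitly to the matching generic interface, and no boxing occurs when value type elements are read back out:

int[] numbers = { 1, 2, 3 };

// resolved at runtime to the SZArrayHelper-based implementation
// grafted onto int[] by the CLR
System.Collections.Generic.IList<int> list = numbers;
int first = list[0];   // no boxing - the element type is int throughout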

In the next post, we'll continue looking at primitives provided by the runtime, and the differences in the language-level operators provided by C# and Java.

Tuesday, November 26, 2013

C# via Java: Primitive types


So, what is a primitive type? According to the Incompleteness Theorem, in any mathematical system, and therefore any computational system, there will always be things that cannot be defined using the rules of the system itself. These form the axioms of that system.

For Java and C#, the axioms are the rules of the language and runtime, as defined in the respective specifications, and those rules cannot be inferred from within the language itself. They simply exist as a given.

So, what rules are the axioms of Java and C#? There are several possibilities, given the wide scope of both languages. For the purposes of this post, I'm going to concentrate on the type systems of both languages, and use the primitive types as the axioms. So what are those primitive types?

  • booleans
  • integers
  • floats
  • arrays of primitives

And that's it. There are several things to note from this definition:

  • Arrays are defined recursively, so you can have an array of arrays of integers.
  • Arrays are a reference type, everything else is a value type.
  • Objects are not a primitive, as an object can be defined using arrays of primitives. As arrays are a reference type, this gives objects, defined using arrays, the semantics of a reference type.
  • Characters are not a primitive either, as those can be defined using integers.
  • Strings can be defined as an array of integers.

Also note that this is not a formal definition - I'm using this definition to learn more about Java and C#, and how they use these primitives to define the rest of the language, not to define and analyse C# and Java using formal type theory or strict mathematics.

The primitive value types

In this post, we'll be starting off with the primitive value types. What are the primitive value types in Java and C#?

Type                      Java      C#
boolean                   boolean   bool
1-byte signed integer     byte      sbyte
1-byte unsigned integer             byte
2-byte signed integer     short     short
2-byte unsigned integer             ushort
4-byte signed integer     int       int
4-byte unsigned integer             uint
8-byte signed integer     long      long
8-byte unsigned integer             ulong
4-byte float              float     float
8-byte float              double    double

Within the runtime, these values all have a predefined representation - for the numbers, simply the byte representation of that number, and for boolean values, a 1-byte value containing zero for false and non-zero for true. As you can see, C# provides signed and unsigned versions of all the various lengths of integers - 1, 2, 4, and 8 bytes - whereas Java only provides signed versions. Note that the C# byte is defined as unsigned, while Java's byte is signed.

When programming using these types, you need to be able to perform operations on them, such as arithmetic operations or comparisons. Due to the Incompleteness Theorem, these operations cannot be defined using code written in the language itself - these operations are defined outside Java or C#. And so the CLR and JVM can perform mathematical operations and comparisons between instances of the primitive value types without using any external libraries. To accomplish this, there are special commands in IL and Java bytecode to perform these built-in operations.

A selection of these commands are:

Operation                        Java bytecode    IL
Add two 4-byte integers          iadd             add
Multiply two 8-byte floats       dmul             mul
Branch if equal                  if_icmpeq        beq
Load a constant 8-byte integer   ldc2_w           ldc.i8

To access these built-in runtime instructions from Java or C#, the language has special syntax that compiles to these instructions, primarily the mathematical operators + - * / < > and ==. So, for example, the following expression:

int i = 10 + 20;

compiles to the following IL:

ldc.i4 10
ldc.i4 20
add
stloc.0

and the following Java bytecode:

ldc 10
ldc 20
iadd
istore_0

All these language mappings, and instructions, are built-in and predefined as axioms in the language and runtime.

Methods on primitive types

However, there are still some operations in the language which don't map directly onto instructions provided by the runtime - for example, toString, parse, or the implementation of a generic Comparable interface. As Java and C# are both object-oriented languages, these methods need to be defined as part of an object of some kind. For 4-byte integers, these methods are defined on System.Int32 in C#, and on java.lang.Integer in Java.

The difference

It's these objects and methods that are the key to understanding the differences between primitive types in Java and C#. Let's start off with Java:

java.lang.Integer

Like all other types in Java, java.lang.Integer is a reference type, which contains a single field of the primitive type int. It's just like any other reference type in Java. It's this type that contains the various methods that act on an int, like toString, parseInt, compareTo, implemented either as a static method that takes or returns an int argument where appropriate, or as an instance method on java.lang.Integer that operates on the instance's int field.

Prior to Java 1.5, you had to manually convert between int and java.lang.Integer, using the constructor on Integer or calling the instance method Integer.intValue() to get the contained int value. In 1.5, the compiler inserts these conversions where appropriate as part of the autoboxing feature.

The important point is that, in Java, an int is a pure 4-byte number, operated on by instructions built-in to the runtime. java.lang.Integer contains all the other operations on integers that can't be compiled directly to runtime instructions. It's just like any other reference type in Java. When necessary, you can create an instance of Integer from an int value to pass an integer value to methods expecting an instance of Object or other reference type.

System.Int32

Similar to java.lang.Integer, System.Int32 is the type containing all the methods on integer values that don't map directly onto operations provided by the runtime. But, where Integer is a reference type, System.Int32 is a value type. This has some quite fundamental consequences to what an integer value is in C#. To understand what these are, we need to take a digression as to how a value type is represented in .NET.

Value types in C#

An instance of a reference type is assigned its own block of memory on the heap, but a value type borrows memory from whatever contains it. If it is declared as a member of a reference type, it uses a section of that object's memory on the heap; if it is being manipulated on the stack, it uses a section of the stack.

If the value type is a member of an outer value type, the inner value type becomes part of the value of the outer value type. For example, the following type definitions:

public struct Inner1
{
    int I1;
    short S1;
    short S2;
}

public struct Inner2
{
    double D1;
}

public struct ValueA
{
    int I2;
    Inner1 v1;
    Inner2 v2;
}

public class ObjectA
{
    float F1;
    ValueA A;
    int I3;
}

will result in the following memory layout for instances of type ObjectA on the heap:

<object header>
F1
ValueA.I2
Inner1.I1
Inner1.S1
Inner1.S2
Inner2.D1
I3

Recursive definition?

So, back to System.Int32. If you have a look at this type in a disassembler, you'll see that its definition is, in IL:

.class public System.Int32 extends System.ValueType
{
    .field assembly int32 m_value
}

This looks like a recursive definition, violating the .NET rule that a struct cannot contain an instance of itself. But it obviously does work, somehow.

The key is the Incompleteness Theorem. int32 is a built-in primitive type that the CLR itself implements using a 4-byte value. The struct System.Int32 is a (more-or-less) standard value type. A value type is comprised of the values of its member fields, and System.Int32 is comprised of a single 4-byte value. That means that an instance of System.Int32 is also a pure 4-byte value.

This is the key to understanding primitive types in .NET - any 4-byte value in memory can be interpreted either as a primitive int32, which can be manipulated by the built-in arithmetic operations, or as an instance of System.Int32, on which the CLR can execute all the methods declared on that type. That change in interpretation can occur without any changes to the program's memory, or any boxing operations; the CLR simply chooses to see a 4-byte value as a primitive type one instant, and a complex value type the next.
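
A short C# example of this dual interpretation in action:

int i = 42;

// a built-in runtime instruction (add) operating on a 4-byte primitive
int j = i + 1;

// a method declared on the System.Int32 value type, called directly
// on the same 4 bytes - no boxing or conversion takes place
string s = i.ToString();

// boxing only occurs when reference type semantics are needed
object boxed = i;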

What is a primitive?

While primitive types in Java are simple values, a value of a primitive type in .NET is simultaneously a primitive value and an instance of a complex value type. Byte sequences of the correct length can be interpreted either way, thanks to the rules determining how value types use memory in the CLR.

As Java does not allow complex value types to be declared, the methods performing operations that aren't built-in to the runtime must be declared on a separate reference type, and the primitive types converted to and from this representation using autoboxing where needed. The CLR simply reinterprets a value as a primitive or complex value type.

That's started us off with the primitives. In the next post, we'll be looking at arrays, and how Java and C# arrays can be used, and what they represent.

Friday, November 8, 2013

C# via Java: Introduction


So, I've recently changed jobs. Rather than working in .NET land, I've migrated over to Java land.

But never fear! I'll continue to peer under the covers of .NET, but my next series will use my new experience in Java to explore the design decisions made in the development of the C# programming language.

After all, the design of C# was based on Java 1.2, and both languages have continued to evolve since then, incorporating modern software engineering concepts and requirements. Exploring the differences and similarities between the two will (hopefully) give us a deeper understanding into why .NET is implemented the way it is, the trade-offs involved, and what choices were made when new features were designed and added to the language and framework.

Among others, I'll be looking at differences in:

  • Primitives
  • Operators
  • Generics
  • Exceptions
  • Accessibility
  • Collections
  • Delegates and inner classes
  • Concurrency

In my next post, I'll start off by looking at the type primitives available in each language, and how Java and C# actually incorporate two different concepts of primitive types in their fundamental language design and use.

I'm also thinking of looking at the inner details of Java and the JVM in my blogs, as well as C# and the CLR. If you've got any comments or thoughts on this, please let me know.

Monday, June 3, 2013

Why unhandled exceptions are useful


It's the bane of most programmers' lives - an unhandled exception causes your application or webapp to crash, an ugly dialog gets displayed to the user, and they come complaining to you. Then, somehow, you need to figure out what went wrong. Hopefully, you've got a log file, or some other way of reporting unhandled exceptions (obligatory employer plug: SmartAssembly reports an application's unhandled exceptions straight to you, along with the entire state of the stack and variables at that point). If not, you have to try and replicate it yourself, or do some psychic debugging to try and figure out what's wrong.

However, it's good that the program crashed. Or, more precisely, it is correct behaviour. An unhandled exception in your application means that, somewhere in your code, there is an assumption that you made that is actually invalid.

Coding assumptions

Let me explain a bit more. Every method, every line of code you write, depends on implicit assumptions that you have made. Take the following simple method, which copies a collection to an array, and includes an extra item if it isn't in the collection already, using a supplied IEqualityComparer:

public static T[] ToArrayWithItem<T>(
    ICollection<T> coll, T obj, IEqualityComparer<T> comparer)
{
    // check if the object is in collection already
    // using the supplied comparer
    foreach (var item in coll)
    {
        if (comparer.Equals(item, obj))
        {
            // it's in the collection already
            // simply copy the collection to an array
            // and return it
            T[] array = new T[coll.Count];
            coll.CopyTo(array, 0);
            return array;
        }
    }
    
    // not in the collection
    // copy coll to an array, and add obj to it
    // then return it
    T[] array = new T[coll.Count+1];
    coll.CopyTo(array, 0);
    array[array.Length-1] = obj;
    return array;
}

What are all the assumptions made by this fairly simple bit of code?

  1. coll is never null
  2. comparer is never null
  3. coll.CopyTo(array, 0) will copy all the items in the collection into the array, in the order defined for the collection, starting at the first item in the array.
  4. The enumerator for coll returns all the items in the collection, in the order defined for the collection
  5. comparer.Equals returns true if the items are equal (for whatever definition of 'equal' the comparer uses), false otherwise
  6. comparer.Equals, coll.CopyTo, and the coll enumerator will never throw an exception or hang for any possible input and any possible values of T
  7. coll will have no more than around 2 billion items in it (collection counts and array lengths are capped at Int32.MaxValue by the CLR)
  8. array won't be more than 2GB, both on 32 and 64-bit systems, for any possible values of T (again, a limit of the CLR)
  9. There are no threads that will modify coll while this method is running
and, more esoterically:
  1. The C# compiler will compile this code to IL according to the C# specification
  2. The CLR and JIT compiler will produce machine code to execute the IL on the user's computer
  3. The computer will execute the machine code correctly
That's a lot of assumptions. Now, it could be that all these assumptions are valid for the situations in which this method is called. But if this does crash out with an exception, or crash later on, then that shows one of the assumptions has been invalidated somehow.

An unhandled exception shows that your code is running in a situation which you did not anticipate, and there is something about how your code runs that you do not understand. Debugging the problem is the process of learning more about the new situation and how your code interacts with it. When you understand the problem, the solution is (usually) obvious. The solution may be a one-line fix, the rewrite of a method or class, or a large-scale refactoring of the codebase, but whatever it is, the fix for the crash will incorporate the new information you've gained about your own code, along with the modified assumptions.

When code is running with a broken assumption or invariant it depended on, the result is 'undefined behaviour'. Anything can happen, up to and including formatting the entire disk, or making the user's computer sentient and having it do a good impression of Skynet. You might think that those can't happen, but at Halting-problem levels of generality, as soon as an assumption the code depended on is broken, the program can do anything. That is why it's important to fail-fast and stop the program as soon as an invariant is broken, to minimise the damage that is done.

What does this mean in practice?

To start with, document and check your assumptions. As with most things, there is a level of judgement required. How you check and document your assumptions depends on how the code is used (that's some more assumptions you've made), how likely it is a method will be passed invalid arguments or called in an invalid state, how likely it is the assumptions will be broken, how expensive it is to check the assumptions, and how bad things are likely to get if the assumptions are broken.

Now, some assumptions you can assume unless proven otherwise. You can safely assume the C# compiler, CLR, and computer all run the method correctly, unless you have evidence of a compiler, CLR or processor bug. You can also assume that interface implementations work the way you expect them to; implementing an interface is more than simply declaring methods with certain signatures in your type. The behaviour of those methods, and how they work, is part of the interface contract as well.

For example, for members of a public API, it is very important to document your assumptions and check your state before running the bulk of the method, throwing ArgumentException, ArgumentNullException, InvalidOperationException, or another exception type as appropriate if the input or state is wrong. For internal and private methods, it is less important. If a private method expects collection items in a certain order, then you don't necessarily need to explicitly check it in code, but you can add comments or documentation specifying what state you expect the collection to be in at a certain point. That way, anyone debugging your code can immediately see what's wrong if this does ever become an issue. You can also use DEBUG preprocessor blocks and Debug.Assert to document and check your assumptions without incurring a performance hit in release builds.
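
As a sketch of what this can look like in practice (the method and its invariant here are hypothetical; Debug.Assert lives in System.Diagnostics):

// public API boundary: always validate arguments
public static void ApplySettings(string[] propNames)
{
    if (propNames == null)
        throw new ArgumentNullException("propNames");

    // internal invariant: documented, and checked in debug builds only
    // (calls to Debug.Assert are compiled away in release builds)
    Debug.Assert(propNames.Length > 0,
        "callers should always pass at least one property name");

    foreach (string name in propNames)
    {
        // ... apply each setting ...
    }
}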

On my coding soapbox...

A few pet peeves of mine around assumptions. Firstly, catch-all try blocks:

try
{
    ...
}
catch { }

A catch-all hides exceptions generated by broken assumptions, and lets the program carry on in an unknown state. Later, more exceptions are likely to be generated as further assumptions break because of that unknown state, and debugging becomes difficult because the catch-all has hidden the original problem. It's much better to let the program crash straight away, so you know where the problem is. You should only use a catch-all if you are sure that any exception generated in the try block is safe to ignore. That's a pretty big ask!

Secondly, using as when you should be casting. Doing this:

(obj as IFoo).Method();

or this:

IFoo foo = obj as IFoo;
...
foo.Method();

when you should be doing this:

((IFoo)obj).Method();

or this:

IFoo foo = (IFoo)obj;
...
foo.Method();
There's an assumption here that obj will always implement IFoo. If it doesn't, then by using as instead of a cast you've turned an obvious InvalidCastException at the point of the cast that will probably tell you what type obj actually is, into a non-obvious NullReferenceException at some later point that gives you no information at all. If you believe obj is always an IFoo, then say so in code! Let it fail-fast if not, then it's far easier to figure out what's wrong.

Thirdly, document your assumptions. If an algorithm depends on a non-trivial relationship between several objects or variables, then say so. A single-line comment will do. Don't leave it up to whoever's debugging your code after you to figure it out.

Conclusion

It's better to crash out and fail-fast when an assumption is broken. If the program doesn't, then there are likely to be further crashes along the way that hide the original problem. Or, even worse, your program will be running in an undefined state, where anything can happen. Unhandled exceptions aren't good per se, but they give you some very useful information about your code that you didn't know before. And that can only be a good thing.

Tuesday, May 28, 2013

.NET Security Part 4


Finally, in this series, I am going to cover some of the security issues that can trip you up when using sandboxed appdomains.

DISCLAIMER: I am not a security expert, and this is by no means an exhaustive list. If you actually are writing security-critical code, then get a proper security audit of your code by a professional. The examples below are just illustrations of the sort of things that can go wrong.

1. AppDomainSetup.ApplicationBase

The most obvious one is the issue covered in the MSDN documentation on creating a sandbox, in step 3 - the sandboxed appdomain has the same ApplicationBase as the controlling appdomain. So let's explore what happens when they are the same, and an exception is thrown.

In the sandboxed assembly, Sandboxed.dll (IPlugin is an interface in a partially-trusted assembly, with a single MethodToDoThings on it):

public class UntrustedPlugin : MarshalByRefObject, IPlugin
{
    // implements IPlugin.MethodToDoThings()
    public void MethodToDoThings()
    {
        throw new EvilException();
    }
}

[Serializable]
internal class EvilException : Exception
{
    public override string ToString()
    {
        // show we have read access to C:\Windows
        // read the first 5 directories
        Console.WriteLine("Pwned! Mwuahahah!");
        foreach (var d in
            Directory.EnumerateDirectories(@"C:\Windows").Take(5))
        {
            Console.WriteLine(d.FullName);
        }
        return base.ToString();
    }
}

And in the controlling assembly:

// what can possibly go wrong?
AppDomainSetup appDomainSetup = new AppDomainSetup {
    ApplicationBase = AppDomain.CurrentDomain.SetupInformation.ApplicationBase
};

// only grant permissions to execute
// and to read the application base, nothing else
PermissionSet restrictedPerms = new PermissionSet(PermissionState.None);
restrictedPerms.AddPermission(
    new SecurityPermission(SecurityPermissionFlag.Execution));
restrictedPerms.AddPermission(
    new FileIOPermission(FileIOPermissionAccess.Read,
        appDomainSetup.ApplicationBase));
restrictedPerms.AddPermission(
    new FileIOPermission(FileIOPermissionAccess.PathDiscovery,
        appDomainSetup.ApplicationBase));

// create the sandbox
AppDomain sandbox = AppDomain.CreateDomain(
    "Sandbox", null, appDomainSetup, restrictedPerms);

// execute UntrustedPlugin in the sandbox
// don't crash the application if the sandbox throws an exception
IPlugin o = (IPlugin)sandbox.CreateInstanceFromAndUnwrap(
    "Sandboxed.dll", "UntrustedPlugin");
try
{
    o.MethodToDoThings();
}
catch (Exception e)
{
    Console.WriteLine(e.ToString());
}

And the result?

Oops. We've allowed a class that should be sandboxed to execute code with fully-trusted permissions! How did this happen? Well, the key is the exact meaning of the ApplicationBase property:

The application base directory is where the assembly manager begins probing for assemblies.

When EvilException is thrown, it propagates from the sandboxed appdomain into the controlling appdomain (as it's marked as Serializable). To deserialize the exception, the CLR needs to load Sandboxed.dll into the controlling appdomain. Because the controlling appdomain's ApplicationBase directory contains the sandboxed assembly, standard assembly probing finds it, the assembly is loaded with full trust, and the evil code is executed.

So the problem isn't exactly that the sandboxed appdomain's ApplicationBase is the same as the controlling appdomain's, it's that the sandboxed dll was in such a place that the controlling appdomain could find it as part of the standard assembly resolution mechanism. The sandbox then forced the assembly to load in the controlling appdomain by throwing a serializable exception that propagated outside the sandbox.

The easiest fix for this is to keep the sandbox ApplicationBase well away from the ApplicationBase of the controlling appdomain, and don't allow the sandbox permissions to access the controlling appdomain's ApplicationBase directory. If you do this, then the sandboxed assembly can't be accidentally loaded into the fully-trusted appdomain, and the code can't be executed. If the plugin does try to induce the controlling appdomain to load an assembly it shouldn't, a SerializationException will be thrown when it tries to load the assembly to deserialize the exception, and no damage will be done.
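
In code, that just means giving the sandbox its own dedicated directory (the path here is hypothetical), containing only the assemblies meant to run sandboxed:

// the sandbox probes for assemblies in its own directory, well away
// from the controlling appdomain's ApplicationBase
AppDomainSetup appDomainSetup = new AppDomainSetup {
    ApplicationBase = @"C:\sandboxes\plugins"
};

// note: no FileIOPermission for the controlling appdomain's own
// ApplicationBase directory is granted to the sandbox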

2. Loading the sandboxed dll into the application appdomain

As an extension of the previous point, you shouldn't directly reference types or methods in the sandboxed dll from your application code. That loads the assembly into the fully-trusted appdomain, and from there its code could be executed. Instead, pull out the methods you want the sandboxed dll to implement into an interface or class in a partially-trusted assembly you control, and execute methods via that instead (similar to the example above with the IPlugin interface).

If you need to have a look at the assembly before executing it in the sandbox, either examine it using reflection from within the sandbox, or load it into the reflection-only context in the application's appdomain. Code in assemblies in the reflection-only context can't be executed; it can only be reflected upon, protecting your appdomain from malicious code.
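
For example, a minimal sketch of inspecting an assembly without letting any of its code run (the path is hypothetical):

// loaded into the reflection-only context - code in this assembly
// can be examined, but never executed
Assembly assembly =
    Assembly.ReflectionOnlyLoadFrom(@"C:\sandboxes\plugins\Sandboxed.dll");

foreach (Type type in assembly.GetTypes())
{
    Console.WriteLine(type.FullName);
}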

3. Incorrectly asserting permissions

You should only assert permissions when you are absolutely sure they're safe. For example, this method gives any caller read access to any file path they pass in, including your documents, any network shares, the C:\Windows directory, etc:

[SecuritySafeCritical]
public static string GetFileText(string filePath)
{
    new FileIOPermission(FileIOPermissionAccess.Read, filePath).Assert();
    return File.ReadAllText(filePath);
}

Be careful when asserting permissions, and ensure you're not providing a loophole sandboxed dlls can use to gain access to things they shouldn't be able to.
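
A safer variant asserts a fixed permission that the method fully controls, rather than one constructed from caller input (the config path here is hypothetical):

[SecuritySafeCritical]
public static string GetConfigText()
{
    // the asserted permission covers one known file rather than a
    // caller-supplied path, so it can't be used as a general loophole
    new FileIOPermission(FileIOPermissionAccess.Read, @"C:\MyApp\config.xml").Assert();
    return File.ReadAllText(@"C:\MyApp\config.xml");
}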

Conclusion

Hopefully, that's given you an idea of some of the ways it's possible to get past the .NET security system. As I said before, this post is not exhaustive, and you certainly shouldn't base any security-critical applications on the contents of this blog post. What this series should help with is understanding the possibilities of the security system, and what all the security attributes and classes mean and what they are used for, if you were to use the security system in the future.

Thursday, May 16, 2013

.NET Security Part 3


You write a security-related application that allows addins to be used. These addins (as dlls) can be downloaded from anywhere, and, if allowed to run full-trust, could open a security hole in your application. So you want to restrict what the addin dlls can do, using a sandboxed appdomain, as explained in my previous posts.

But there needs to be an interaction between the code running in the sandbox and the code that created the sandbox, so the sandboxed code can control or react to things that happen in the controlling application. Sandboxed code needs to be able to call code outside the sandbox.

Now, there are various methods of allowing cross-appdomain calls, the two main ones being .NET Remoting with MarshalByRefObject, and WCF named pipes. I'm not going to cover the details of setting up such mechanisms here, or which you should choose for your specific situation; there are plenty of blogs and tutorials covering such issues elsewhere. What I'm going to concentrate on here is the more general problem of running fully-trusted code within a sandbox, which is required in most methods of app-domain communication and control.

Defining assemblies as fully-trusted

In my last post, I mentioned that when you create a sandboxed appdomain, you can pass in a list of assembly strongnames that run as full-trust within the appdomain:

// get the Assembly object for the assembly
Assembly assemblyWithApi = ...    

// get the StrongName from the assembly's collection of evidence
StrongName apiStrongName = assemblyWithApi.Evidence.GetHostEvidence<StrongName>();

// create the sandbox
AppDomain sandbox = AppDomain.CreateDomain(
    "Sandbox", null, appDomainSetup, restrictedPerms, apiStrongName);

Any assembly that is loaded into the sandbox with a strong name matching one in the list of full-trust strong names is unconditionally given full-trust permissions within the sandbox, regardless of the permissions and sandbox setup. This is very powerful! You should only use this for assemblies that you trust as much as the code creating the sandbox.

So now you have a class that you want the sandboxed code to call:

// within assemblyWithApi
public class MyApi
{
    public static void MethodToDoThings() { ... }
}

// within the sandboxed dll
public class UntrustedSandboxedClass
{
    public void DodgyMethod()
    {        
        ...
        MyApi.MethodToDoThings();
        ...
    }
}

However, if you try to do this, you get quite an ugly exception:

MethodAccessException: Attempt by security transparent method 'UntrustedSandboxedClass.DodgyMethod()' to access security critical method 'MyApi.MethodToDoThings()' failed.

Security transparency, which I covered in my first post in the series, has entered the picture. Partially-trusted code runs at the Transparent security level, fully-trusted code runs at the Critical security level, and Transparent code cannot, under any circumstances, call Critical code.

Security transparency and AllowPartiallyTrustedCallersAttribute

So the solution is easy, right? Make MethodToDoThings SafeCritical, then the transparent code running in the sandbox can call the api:

[SecuritySafeCritical]
public static void MethodToDoThings() { ... }

However, this doesn't solve the problem. When you try again, exactly the same exception is thrown; MethodToDoThings is still running as Critical code. What's going on?

By default, a fully-trusted assembly always runs Critical code, regardless of any security attributes on its types and methods. This is because it may not have been designed in a secure way when called from transparent code - as we'll see in the next post, it is easy to open a security hole despite all the security protections .NET 4 offers. When exposing an assembly to be called from partially-trusted code, the entire assembly needs a security audit to decide what should be transparent, safe critical, or critical, and to close any potential security holes.

This is where AllowPartiallyTrustedCallersAttribute (APTCA) comes in. Without this attribute, fully-trusted assemblies run Critical code, and partially-trusted assemblies run Transparent code. When this attribute is applied to an assembly, it confirms that the assembly has had a full security audit, and it is safe to be called from untrusted code. All code in that assembly runs as Transparent, but SecurityCriticalAttribute and SecuritySafeCriticalAttribute can be applied to individual types and methods to make those run at the Critical or SafeCritical levels, with all the restrictions that entails.

So, to allow the sandboxed assembly to call the full-trust API assembly, simply add APTCA to the API assembly:

[assembly: AllowPartiallyTrustedCallers]

and everything works as you expect. The sandboxed dll can call your API dll, and from there communicate with the rest of the application.
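
Putting the pieces together, the full-trust API assembly ends up looking something like this sketch:

// in the full-trust API assembly
[assembly: AllowPartiallyTrustedCallers]

public static class MyApi
{
    // under APTCA the assembly defaults to Transparent; this method
    // explicitly opts back in to elevated permissions
    [SecuritySafeCritical]
    public static void MethodToDoThings()
    {
        // validate anything that came from the sandbox, then perform
        // the privileged work on the caller's behalf
    }
}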

Conclusion

That's the basics of running a full-trust assembly in a sandboxed appdomain, and allowing a sandboxed assembly to access it. The key is AllowPartiallyTrustedCallersAttribute, which is what lets partially-trusted code call into a fully-trusted assembly. However, applying APTCA to an assembly is a declaration that you have run a full security audit of every type and member in the assembly. If you haven't, then you could inadvertently open a security hole. I'll be looking at ways this can happen in my next post.

Tuesday, May 7, 2013

.NET Security Part 2


So, how do you create partial-trust appdomains? Where do you come across them?

There are two main situations in which your assembly runs as partially-trusted using the Microsoft .NET stack:

  1. Creating a CLR assembly in SQL Server with anything other than the UNSAFE permission set. The permissions available in each permission set are given here.
  2. Loading an assembly in ASP.NET in any trust level other than Full. Information on ASP.NET trust levels can be found here. You can configure the specific permissions available to assemblies using ASP.NET policy files.

Alternatively, you can create your own partially-trusted appdomain in code and directly control the permissions and the full-trust API available to the assemblies you load into the appdomain. This is the scenario I'll be concentrating on in this post.

Creating a partially-trusted appdomain

There is a single overload of AppDomain.CreateDomain that allows you to specify the permissions granted to assemblies in that appdomain - AppDomain.CreateDomain(string friendlyName, Evidence securityInfo, AppDomainSetup info, PermissionSet grantSet, params StrongName[] fullTrustAssemblies). This is the only overload that allows you to specify a PermissionSet for the domain; all the other overloads simply use the permissions of the calling code. If the permissions are restricted, then the resulting appdomain is referred to as a sandboxed domain.

There are three things you need to create a sandboxed domain:

  1. The specific permissions granted to all assemblies in the domain.
  2. The application base (the root directory used to resolve assemblies) of the domain.
  3. The list of assemblies that have full-trust if they are loaded into the sandboxed domain.

The third item is what allows us to have a fully-trusted API that is callable by partially-trusted code. I'll be looking at the details of this in a later post.

Granting permissions to the appdomain

Firstly, the permissions granted to the appdomain. This is encapsulated in a PermissionSet object, initialized either with no permissions or full-trust permissions. For sandboxed appdomains, the PermissionSet is initialized with no permissions, then you add permissions you want assemblies loaded into that appdomain to have by default:

PermissionSet restrictedPerms = new PermissionSet(PermissionState.None);

// all assemblies need Execution permission to run at all
restrictedPerms.AddPermission(
    new SecurityPermission(SecurityPermissionFlag.Execution));

// grant general read access to C:\config.xml
restrictedPerms.AddPermission(
    new FileIOPermission(FileIOPermissionAccess.Read, @"C:\config.xml"));

// grant permission to perform DNS lookups
restrictedPerms.AddPermission(
    new DnsPermission(PermissionState.Unrestricted));

It's important to point out that the permissions granted to an appdomain, and so to all assemblies loaded into that appdomain, are usable without needing to go through any SafeCritical code (see my last post if you're unsure what SafeCritical code is). That is, partially-trusted code loaded into an appdomain with the above permissions (and so running under the Transparent security level) is able to create and manipulate a FileStream object to read from C:\config.xml directly. It is only for operations requiring permissions that are not granted to the appdomain that partially-trusted code is required to call a SafeCritical method that then asserts the missing permissions and performs the operation safely on behalf of the partially-trusted code.
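
So, with the permission set above, code in the sandbox can simply do this, with no SafeCritical broker involved:

// running as Transparent code inside the sandbox - this succeeds
// because the appdomain itself was granted read access to the file
using (FileStream stream =
    new FileStream(@"C:\config.xml", FileMode.Open, FileAccess.Read))
using (StreamReader reader = new StreamReader(stream))
{
    string contents = reader.ReadToEnd();
}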

The application base of the domain

This is simply set as a property on an AppDomainSetup object, and is used as the default directory assemblies are loaded from:

AppDomainSetup appDomainSetup = new AppDomainSetup {
    ApplicationBase = @"C:\temp\sandbox",
};

If you've read the documentation around sandboxed appdomains, you'll notice that it mentions a security hole that opens up if this parameter is set incorrectly. I'll be looking at this, and other pitfalls that can break the sandbox, in a later post.

Full-trust assemblies in the appdomain

Finally, we need the strong names of the assemblies that, when loaded into the appdomain, will run as full-trust, regardless of the permissions specified on the appdomain. These assemblies will contain methods and classes decorated with the SafeCritical and Critical attributes. I'll be covering the details of creating full-trust APIs for partial-trust appdomains in a later post. This is how you get the strong name of an assembly so it can be executed as full-trust in the sandbox:

// get the Assembly object for the assembly
Assembly assemblyWithApi = ...    

// get the StrongName from the assembly's collection of evidence
StrongName apiStrongName = assemblyWithApi.Evidence.GetHostEvidence<StrongName>();

Creating the sandboxed appdomain

So, putting these three together, you create the appdomain like so:

AppDomain sandbox = AppDomain.CreateDomain(
    "Sandbox", null, appDomainSetup, restrictedPerms, apiStrongName);

You can then load and execute assemblies in this appdomain like any other. For example, to load an assembly into the appdomain and get an instance of the Sandboxed.Entrypoint class, implementing IEntrypoint, you do this:

IEntrypoint o = (IEntrypoint)sandbox.CreateInstanceFromAndUnwrap(
    @"C:\temp\sandbox\SandboxedAssembly.dll", "Sandboxed.Entrypoint");

// call the Execute method on this object within the sandbox
o.Execute();

The second parameter to CreateDomain is for security evidence used in the appdomain. This was a feature of the .NET 2 security model, and has been (mostly) obsoleted in the .NET 4 model. Unless the evidence is needed elsewhere (e.g. for isolated storage), you can pass in null for this parameter.

Conclusion

That's the basics of sandboxed appdomains. The most important object is the PermissionSet that defines the permissions available to assemblies running in the appdomain; it is this object that defines the appdomain as full or partial-trust. The appdomain also needs a default directory used for assembly lookups as the ApplicationBase parameter, and you can specify an optional list of the strongnames of assemblies that will be given full-trust permissions if they are loaded into the sandboxed appdomain.

Next time, I'll be looking closer at full-trust assemblies running in a sandboxed appdomain, and what you need to do to make an API available to partial-trust code.

Thursday, May 2, 2013

.NET Security Part 1


Ever since the first version of .NET, it's been possible to strictly define the actions and resources a particular assembly can use, and, using Code Access Security, permissions to perform certain actions or access certain resources can be defined and modified in code. In .NET 4, the system was completely overhauled. Today, I'll be starting a look at what the security model is in .NET 4, how you use it, and what you can do with it.

Partial and full-trust assemblies

Most developers aren't affected by the .NET 4 security model. This is because it only affects assemblies loaded as partial-trust. All assemblies loaded directly on a desktop system, either as part of a desktop application, or a web application on a full-trust appdomain (the default), or loaded as UNSAFE into SQL Server, run as full trust. This means they have full access to the system, and can do whatever they want with no restrictions.

But when an assembly is loaded into a partial-trust appdomain, the actions and resources it can access are limited to the permissions that are granted to it. For example, partially-trusted code can only read a file on disk if it has explicitly been given FileIOPermission to read the file, and it can only access the registry if it has been given RegistryPermission to do so (all the permissions that can be granted to and denied from partial-trust assemblies inherit from System.Security.CodeAccessPermission). This is to limit what untrusted assemblies can do, and stop assemblies containing potentially damaging code from accessing the system and doing something they shouldn't (say, format the disk).

When a certain permission is required to perform an action, the entire call stack leading up to the call performing that action (for example, File.OpenRead or Registry.OpenSubKey) is checked for that permission. If there is any method on the call stack that is running as partial-trust, and has not explicitly been granted the required permission, then the call fails with a SecurityException, even if the partially-trusted method that failed the permission check is many stackframes down.

This ensures that there is no way for partially-trusted code to get round permissions that haven't been granted to it by delegating to or exploiting something else that does have the permission. If the partial-trust code is running, then it is on the call stack. And if it's on the call stack, then it cannot directly or indirectly perform actions that it hasn't been given permissions for.

But this is a problem - there are legitimate situations in which partially-trusted code needs to perform security-critical actions, even if it hasn't been given permissions to do so directly. For example, a desktop application (running as full trust) loads an addin into a partial trust appdomain. The desktop app provides an API to the addin to update values in a configuration file on disk, but doesn't grant general read-write access to the config file. However, when the addin tries to update the config file through the API provided by the full-trust application, such updates will always fail with a SecurityException, because the partial-trust code in the addin doesn't have permissions to write to the filesystem, even though it is the full-trust code that is actually writing to the config file.

So, there needs to be a way for full-trust code to override the permission check in a secure way such that it can perform security-critical actions on behalf of partial-trust code, once it has verified the partial-trust code is not misbehaving. There are two features that allow this - permission asserts, and security transparency.

Permission demands and asserts

To start a stack walk to check for a certain permission, you simply demand it, either using a method call:

public void UpdateConfigFile(string propName, bool newValue)
{
    new FileIOPermission(FileIOPermissionAccess.Write, @"C:\config.xml").Demand();
    // .. write to the file ..
}

or using an attribute, which demands the permission when the method is called. This is functionally identical to calling Demand() as the first statement in the method:

[FileIOPermission(SecurityAction.Demand, Write = @"C:\config.xml")]
public void UpdateConfigFile(string propName, bool newValue)
{
    // .. write to the file ..
}

(Note that demanding a FileIOPermission directly isn't normally required, as the BCL methods that access the filesystem all demand the appropriate permissions themselves).

Then, to stop a stack walk for a permission from checking past the current stack frame and hitting partially-trusted code, you assert the same permission before calling the method that demands it. Again, you can do this in code:

public void ChangeConfigProperty(string propName, bool newValue)
{
    new FileIOPermission(FileIOPermissionAccess.Write, @"C:\config.xml").Assert();
    UpdateConfigFile(propName, newValue);
}

or using an attribute:

[FileIOPermission(SecurityAction.Assert, Write = @"C:\config.xml")]
public void ChangeConfigProperty(string propName, bool newValue)
{
    UpdateConfigFile(propName, newValue);
}

After a permission has been asserted, any security check for that permission triggered by method calls after that point in the same method stops at that stack frame.
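
One detail worth noting: an assert stays active for the remainder of the method, so it's good practice to revert it once the privileged call has finished (a sketch using the methods above):

new FileIOPermission(FileIOPermissionAccess.Write, @"C:\config.xml").Assert();
try
{
    UpdateConfigFile("Enabled", true);
}
finally
{
    // remove the assert, so any later calls in this method are
    // subject to the normal stack walk again
    CodeAccessPermission.RevertAssert();
}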

For this to be effective, there has to be a way to stop partially trusted code from simply asserting whatever permissions it wants, but still being able to call trusted code that can assert those permissions. This is where security transparency comes in.

Security transparency

There are three security levels code runs under - Transparent, SafeCritical, and Critical. Each imposes restrictions on what the code can do and what it can call, independent of any code access permissions applied to it:

Transparent
This is the security level all partially-trusted code runs under. There are several restrictions imposed on transparent code; in particular, transparent code cannot do the following:
  • Call Critical-security code (but it can call SafeCritical code).
  • Assert additional permissions.
  • Contain unsafe or unverifiable code.
  • Call P/Invoke methods.
  • Override or inherit Critical types and methods.
SafeCritical
This is the 'broker' between Transparent and Critical-security code. Transparent code cannot call Critical code directly, but it can call SafeCritical code, which in turn can call Critical code. SafeCritical code verifies the caller isn't trying to do something it shouldn't, then either performs the action itself or passes it to a Critical method. There is no restriction on what SafeCritical code can do.
Critical
This is the level all fully-trusted code runs under. There is no restriction on what Critical code can do.
This diagram shows the relationships between the different levels, and what each level can call (green represents allowed method calls between levels, red disallowed calls):

Putting it together

Permission asserts and security transparency allow a partially-trusted assembly (running transparent-security code) to call an API in a fully-trusted assembly (running Critical and SafeCritical code) to perform actions that the partially-trusted assembly doesn't itself have permissions for:

public void PartiallyTrustedMethod()
{
    ChangeConfigProperty("Enabled", true);
}

[SecuritySafeCritical]
public void ChangeConfigProperty(string propName, bool newValue)
{
    // check propName isn't too long,
    // escape any sequences that could be dangerous
    Sanitise(propName);
    
    // assert that it's ok to write to the config file
    new FileIOPermission(FileIOPermissionAccess.Write, @"C:\config.xml").Assert();

    UpdateConfigFile(@"C:\config.xml", propName, newValue);
}

[SecurityCritical]
public void UpdateConfigFile(string file, string propName, bool newValue)
{
    // this demands Write permission to the config file
    File.WriteAllLines(file, new[] { "propName = " + newValue });
}

Next time, we'll look at how you create a partially-trusted appdomain in your own code, and how you can run fully-trusted code in a partially-trusted appdomain.

Friday, April 19, 2013

Inside Portable Class Libraries


Portable Class Libraries were introduced with Visual Studio 2010 SP1 to aid writing libraries that could be used on many different platforms - the full .NET 4/4.5 framework, Windows Phone, Silverlight, Xbox, and Windows Store apps. You simply select which platforms and versions you want to target, then the available subset of APIs are magically available. But how does it work? How does Visual Studio know what it can target, and how does the same assembly run on many different platforms? Today, I'll be finding out.

Creating a Portable Class Library

When you create a PCL in Visual Studio, you select the platforms and versions you want to target. In this example, I've selected everything at the lowest available version. In the project references list, this turns into a generic '.NET Portable Subset', with no real identifying information as to what it actually is:

Hmm, ok, well let's see what the actual built assembly does with it. Let's create a field of type Action<T1,T2> so the assembly actually has something in it:

public class Class1 {
    Action<int, double> action = null;
}

After building the assembly, and opening it up in a decompiler, we can see that that mysterious '.NET Portable Subset' reference has turned into standard assembly references to mscorlib.dll and System.Core.dll. However, they look a bit odd:

mscorlib, Version=2.0.5.0, Culture=neutral,
    PublicKeyToken=7cec85d7bea7798e, Retargetable=Yes
System.Core, Version=2.0.5.0, Culture=neutral,
    PublicKeyToken=7cec85d7bea7798e, Retargetable=Yes

2.0.5.0 is the version number used by Silverlight assemblies, and that's the Silverlight public key, but that 'Retargetable' flag is new. And if you have a look at the assembly-level attributes, you'll spot something familiar:

[assembly: TargetFramework(
    ".NETPortable,Version=v4.0,Profile=Profile1",
    FrameworkDisplayName=".NET Portable Subset")]

Aha! There's the '.NET Portable Subset' from the Visual Studio reference list. But what about the target framework? ".NETPortable,Version=v4.0,Profile=Profile1"? What's that all about? Well, have a look in 'C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETPortable\v4.0\Profile\'. In there is a list of every possible .NET 4 PCL subset you can target (22 in total). Within each profile directory are the available assemblies containing the types that can be used, and an xml file for each framework supported by that profile containing platform version information.

These profile directories have been pre-calculated by Microsoft and installed alongside Visual Studio. When you create a PCL project and select the platforms and versions you want supported, Visual Studio looks at all the available profiles and the framework versions they are valid for. From the version and platform information in each profile it works out the most applicable profile, and the dlls in that profile are the ones it compiles the PCL assembly against and uses to provide IntelliSense support.

But these dlls are useless at runtime. If you open one of the dlls in a decompiler, you'll see that all the method bodies are empty or simply return the default value for the return type. These dlls exist only to be referenced and compiled against.

Using a portable class library

So the dlls in the Reference Assemblies folder are, rather unsurprisingly, only to be referenced. Something else happens at runtime to make the portable library work on all the supported frameworks.

It turns out that it all comes down to a feature of .NET assemblies that was introduced in .NET 2, and that I looked at two years ago - type forwards. In the portable class library I've built, the System.Action`2 type I've used has been resolved to the System.Core assembly. On different platforms, it may be in different places. But every platform will either contain the type in System.Core, or System.Core will have a type forward to where the type is actually located.

So, as you'll see in the framework-specific reference assemblies, Silverlight 4, Windows Phone, and Xbox all have System.Action`2 located in their System.Core.dll, so the type is resolved successfully on those platforms. Both the desktop and Silverlight 5 System.Core.dll have a type forward for System.Action`2 to the relevant mscorlib.dll, where the type is actually located.

Windows Store applications (the framework for Windows Store applications is called '.NETCore') forward the type to System.Runtime.dll. And, if you take a further look at the System.Core.dll in the .NETCore framework, this assembly contains no types whatsoever! The only things of any note in that assembly are a series of type forwards to various other assemblies in the .NETCore framework - that assembly exists only to redirect type references in portable class libraries when they are used in Windows Store applications.
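
In C#, a type forward is declared with the TypeForwardedToAttribute from System.Runtime.CompilerServices; a sketch of what such a forwarding-only assembly effectively contains:

// any reference to System.Action<T1,T2> that resolves to this assembly
// is redirected to the assembly that actually defines the type
[assembly: TypeForwardedTo(typeof(Action<,>))]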

Cross-version assembly references

There is one more thing we need to sort out. If you have a look at the assembly references in the original PCL we built, they reference a specific version of mscorlib.dll and System.Core.dll:

mscorlib, Version=2.0.5.0, Culture=neutral,
    PublicKeyToken=7cec85d7bea7798e, Retargetable=Yes
System.Core, Version=2.0.5.0, Culture=neutral,
    PublicKeyToken=7cec85d7bea7798e, Retargetable=Yes

These versions are the same as the version numbers on Silverlight 4, Windows Phone, and Xbox framework assemblies. But the version of mscorlib for Silverlight 5 is:

mscorlib, Version=5.0.5.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e

and for .NET 4 desktop and .NETCore:

mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089

This is a problem. These assemblies all have strong name signatures, and the version & public key form part of the assembly's identity. An assembly reference to an assembly with version 2.0.5.0 and public key 7cec85d7bea7798e cannot be resolved to an assembly with version 4.0.0.0 and public key b77a5c561934e089. To the CLR, these are two completely different assemblies.

That's where the Retargetable flag on the assembly references comes in. When this flag is set on an assembly reference, the reference can be resolved to an assembly with a different version and public key, even though that is technically a different assembly. This flag is set on all the references to framework dlls in a PCL, which means the PCL can run on frameworks with different assembly versions and public keys - each framework dll reference is resolved to whichever assembly is available in the framework the library is executing on at runtime.
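You can see this flag for yourself with a little reflection. A minimal sketch, assuming the PCL has been compiled to a file called Portable.dll:

using System;
using System.Reflection;

class ListReferences
{
    static void Main()
    {
        // Load the PCL for inspection only, and dump each assembly reference;
        // retargetable references include 'Retargetable=Yes' in their full name.
        Assembly pcl = Assembly.ReflectionOnlyLoadFrom("Portable.dll");
        foreach (AssemblyName reference in pcl.GetReferencedAssemblies())
        {
            bool retargetable =
                (reference.Flags & AssemblyNameFlags.Retargetable) != 0;
            Console.WriteLine("{0} (retargetable: {1})",
                reference.FullName, retargetable);
        }
    }
}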

Conclusion

There's nothing magic about portable class libraries. They are compiled just like any other assembly, but against a specific pre-defined subset of the libraries available in the different frameworks, defined by portable profiles representing the various combinations of types and methods available. When the library is executing on a specific framework at runtime, type forwards redirect any types that have been moved to a different assembly in that framework. The common CLR, assembly metadata, and IL formats across all the frameworks and versions ensure the actual code logic in the assembly executes the same way on any of them.

Thursday, April 18, 2013

Subterranean IL: ThreadLocal revisited


Last year, I looked at the ThreadLocal type as it exists in .NET 4. In .NET 4.5, this type has been completely rewritten, so in this post I'll be looking at how the new version works. I won't be covering every implementation detail, but concentrating on the overall design. Again, it's recommended you have the type open in a decompiler.

No More Generics!

The most obvious change is the lack of generic classes - it no longer uses generic instantiations to store individual thread static variables. Instead, it uses a design similar to that of ConcurrentBag - values held in a thread static array, with each instance of ThreadLocal assigned its own index into that array, and linked lists between the items in the different threads' arrays to allow access from any thread.

The important variables here are the thread static ts_slotArray, m_idComplement and m_linkedSlot. Each thread has its own ts_slotArray instance, and each instance of ThreadLocal has its own slot index into those arrays, stored as m_idComplement (I'm ignoring the fact that ThreadLocal is generic for now; each generic instantiation of ThreadLocal has its own static variables, independent of any other). All the values stored in a particular instance, across every thread, are reachable through the linked list headed by m_linkedSlot.

However, these extra links between arrays mean that the value to be stored can't be put straight into ts_slotArray; you need an extra type to provide these links. This is where the LinkedSlot type comes in - it provides Next and Previous fields to link between slots in different arrays. This graph indicates how the different fields interact - the arrows represent the Next and Previous references between slots:

Note that the instance of LinkedSlot directly referenced by the m_linkedSlot field is an empty instance that is not stored in any array; it exists only to be the target of another slot's Previous field, and simplifies the logic in the other methods.
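To make this concrete, here's a simplified sketch of the fields involved, using the names from the decompiled source; the volatile wrappers, locking and versioning in the real implementation are omitted, and the member signatures are approximations:

public class ThreadLocal<T>
{
    // One array per thread; index i holds the slot belonging to the
    // ThreadLocal instance with id i.
    [ThreadStatic]
    private static LinkedSlot[] ts_slotArray;

    // The bitwise complement of this instance's slot index, so that the
    // default field value of 0 is never a valid id.
    private int m_idComplement;

    // Dummy head of the linked list of all slots holding this instance's values.
    private LinkedSlot m_linkedSlot = new LinkedSlot();

    private class LinkedSlot
    {
        internal LinkedSlot Next;        // next slot in this instance's list
        internal LinkedSlot Previous;    // previous slot (or the dummy head)
        internal LinkedSlot[] SlotArray; // the thread static array holding this slot
        internal T Value;                // the value stored for one thread
    }
}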

Setting values

Each instance of ThreadLocal is assigned a unique index by the IdManager class when it is created. When a thread first sets a value in an instance of ThreadLocal, the following happens in SetValueSlow (a condensed sketch follows the list):

  1. If the slot array hasn't been assigned for this thread (i.e. this is the first time this thread has accessed any instance of ThreadLocal), it creates a new array to hold enough items for this instance's slot index.
  2. If the array isn't big enough for this instance's slot index, it is resized (in the GrowTable method), and all the LinkedSlots it contains are updated to point to the new array.
  3. CreateLinkedSlot is called to create a new LinkedSlot instance and store it in the array at the instance's slot index. The new slot is also added to the head of the linked list pointed to by m_linkedSlot in this instance of ThreadLocal.
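
Putting those steps together, a condensed sketch of SetValueSlow, assuming the simplified fields from the earlier sketch (the real method's locking and array growth policy are glossed over):

private void SetValueSlow(T value)
{
    int id = ~m_idComplement;
    LinkedSlot[] slotArray = ts_slotArray;

    if (slotArray == null)
    {
        // Step 1: first access to any ThreadLocal on this thread.
        slotArray = new LinkedSlot[id + 1];
        ts_slotArray = slotArray;
    }
    else if (id >= slotArray.Length)
    {
        // Step 2: grow the array and repoint the existing slots at it.
        slotArray = GrowTable(slotArray, id + 1);
        ts_slotArray = slotArray;
    }

    if (slotArray[id] == null)
        CreateLinkedSlot(slotArray, id, value); // step 3: create and link a new slot
    else
        slotArray[id].Value = value;            // the slot already exists: overwrite
}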

Subsequently, whenever a value is read or written, it simply accesses the value at the slot index owned by the ThreadLocal instance being used, in the slot array belonging to the accessing thread.
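
A minimal sketch of that fast path, again assuming the simplified fields above (the real property also deals with default-value initialization and disposed instances in its slow paths):

public T Value
{
    get
    {
        LinkedSlot[] slotArray = ts_slotArray;
        int id = ~m_idComplement;

        // Fast path: this thread has already set a value for this instance.
        if (slotArray != null && id >= 0 && id < slotArray.Length
            && slotArray[id] != null)
            return slotArray[id].Value;

        return GetValueSlow();   // first read on this thread
    }
    set
    {
        LinkedSlot[] slotArray = ts_slotArray;
        int id = ~m_idComplement;

        if (slotArray != null && id >= 0 && id < slotArray.Length
            && slotArray[id] != null)
            slotArray[id].Value = value;   // fast path: overwrite in place
        else
            SetValueSlow(value);           // first write on this thread
    }
}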

Removing and disposing of ThreadLocals

So that's what happens when values are set. What about when the thread is no longer running, or the ThreadLocal is disposed? Both require values to be removed or unset in the arrays and untangled from the linked lists; otherwise any values set will just stay there, won't be collected, and will cause a memory leak.

  1. ThreadLocal.Dispose

    When an instance of ThreadLocal is disposed or finalized, it needs to clear the instances of LinkedSlot in all the referenced slot arrays. Fortunately, this is quite easy to do - it simply iterates through the linked list headed by m_linkedSlot and clears the entries (a simplified sketch follows this list). Finally, it returns the slot index it was using to the IdManager class, to be reused when the next instance of ThreadLocal is created.

  2. Thread exit

    Dealing with a thread exit is harder, as there isn't a global event that fires whenever a thread exits. Fortunately, a little-known feature of thread statics can be used to clear up the slot array belonging to a thread that has exited.
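
Here's the simplified Dispose sketch mentioned in item 1, continuing with the earlier field names (the real method takes a lock while unlinking, and s_idManager is assumed to be a static instance of the IdManager class mentioned above):

public void Dispose()
{
    int id = ~m_idComplement;
    m_idComplement = 0;   // marks this instance as disposed (0 is never a valid complement)

    // Walk the linked list and clear this instance's entry in each thread's
    // slot array, so the stored values become eligible for collection.
    for (LinkedSlot slot = m_linkedSlot.Next; slot != null; slot = slot.Next)
    {
        LinkedSlot[] slotArray = slot.SlotArray;
        if (slotArray != null)
            slotArray[id] = null;
    }
    m_linkedSlot = null;

    // Hand the slot index back so the next ThreadLocal created can reuse it.
    s_idManager.ReturnId(id);
}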

Detecting thread exits

Normal static fields, once the type has been initialized, stay around until the AppDomain exits. That means that any object being referenced by a static field won't be collected until the field is explicitly cleared.

However, thread static fields are different. The CLR keeps track of which threads are active and which have exited, and links this information to the values stored in thread static fields. This means that any value set on a thread static field by a thread that has since exited, and that isn't referenced by anything else, is eligible for garbage collection, and will be collected the next time the garbage collector runs.
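
This behaviour is easy to demonstrate. A small self-contained example (the type and field names are purely for the demo):

using System;
using System.Threading;

class ExitDetector
{
    // Runs once the owning thread has exited and the GC has collected this object.
    ~ExitDetector()
    {
        Console.WriteLine("Thread exited; detector finalized");
    }
}

class Program
{
    [ThreadStatic]
    static ExitDetector ts_detector;

    static void Main()
    {
        Thread thread = new Thread(() => { ts_detector = new ExitDetector(); });
        thread.Start();
        thread.Join();   // the thread has now exited

        // The detector was only referenced by the exited thread's static slot,
        // so it is now eligible for collection and its finalizer runs here.
        GC.Collect();
        GC.WaitForPendingFinalizers();
    }
}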

This feature is exploited by ThreadLocal to clear up the slot arrays of exited threads. This is primarily performed by the FinalizationHelper class, an instance of which is created and stored in a thread static field when a thread's slot array is first created.

FinalizationHelper

This class only exists for its finalizer. When a thread exits, the corresponding instance of FinalizationHelper assigned to the ts_finalizationHelper field becomes eligible for collection. If and when the garbage collector runs, the instance gets collected and the finalizer is run. The finalizer removes any non-empty slots from the linked lists of active ThreadLocal instances, unless the values need to be kept so that a call to ThreadLocal.Values can still return every value ever set on that instance.
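
A simplified sketch of that finalizer, continuing with the earlier field names (the real class also takes a lock while unlinking, and keeps the values alive when they need to be returned by ThreadLocal.Values):

private class FinalizationHelper
{
    internal LinkedSlot[] SlotArray;   // the owning thread's slot array

    internal FinalizationHelper(LinkedSlot[] slotArray)
    {
        SlotArray = slotArray;
    }

    ~FinalizationHelper()
    {
        // The owning thread has exited: unlink every slot in its array from
        // the ThreadLocal instances' lists so the slots and values can be collected.
        foreach (LinkedSlot slot in SlotArray)
        {
            if (slot != null)
            {
                slot.Previous.Next = slot.Next;   // Previous is never null (dummy head)
                if (slot.Next != null)
                    slot.Next.Previous = slot.Previous;
            }
        }
    }
}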

Conclusion

So there we are; the upgraded ThreadLocal. It's an improvement on the old version: it allows access to all the values ever set on an instance of ThreadLocal, it doesn't fall back on the thread-local data store, and it doesn't pollute the namespace with thousands of generic instantiations of holder classes. Much better!

Copyright © simonc