In my last post, I introduced TouchToolkit – a toolkit for developing touch-enabled applications. This is the first of a multi-part series where I will explain how TouchToolkit can help simplify the development of multi-touch applications in Silverlight or WPF 4.0.
While we can use recorded touch interactions (I will explain the recorder module in another post), it's better to have a touch-enabled device (e.g. Dell XT2) or an emulator (e.g. MultiTouchVista) to test the application.
First, we need to install the Visual Studio Templates for TouchToolkit. This will add a number of project templates and item templates to Visual Studio 2010. Although we could simply add references to the TouchToolkit DLLs in an existing project, the project template is a better choice for new projects.
Let’s create a sample application in WPF 4.0. To start, we choose the “TouchToolkit for Windows 7” project template. The template will create a WPF 4.0 project with a few additional things:
- Some sample code in MainWindow.xaml.cs that shows how to subscribe to gesture events, add visual feedback, etc.
- The “TouchToolkit” folder that contains the framework assemblies and the files necessary to extend the framework (e.g. to create new gestures, return types, etc.)
We could just press F5 and see the sample code in action. However, let’s go through some of the important sections first.
Step 1: Initialize the framework
The TouchToolkit framework supports a number of devices, including Windows 7 based tablets, TUIO based devices, Microsoft Surface, and so on. So, we need to inform the framework about the current input source by passing the right provider. This makes the rest of the framework device-independent and also allows new providers to be added to support additional devices.
Since I am using a tablet that supports Windows 7 Touch (Dell XT2), I used the Windows7TouchInputProvider. The second parameter is the root panel that contains all UI elements (sorry, we currently only support Canvas as the root container), and the third parameter is a reference to the current project assembly. This allows us to extend the toolkit (e.g. with new gestures, return types, visual effects), as the framework automatically searches this assembly for new types.
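Based on the description above, the initialization call might look something like the following sketch. The exact method name (`Initialize`) and parameter order are assumptions, and `LayoutRoot` is a hypothetical name for the root Canvas; the project template generates the precise call for you.

```csharp
// Sketch only: method name/signature assumed from the description above.
public MainWindow()
{
    InitializeComponent();

    GestureFramework.Initialize(
        new Windows7TouchInputProvider(),  // input source: Windows 7 Touch
        LayoutRoot,                        // root Canvas containing all UI elements
        Assembly.GetExecutingAssembly());  // scanned for custom gestures/return types
}
```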
Next, we want to get visual feedback when someone touches the screen. So, we added a touch feedback. The toolkit currently provides one touch feedback (i.e. BubblesPath); however, we can easily create our own visual effect class and use that instead. Another type of visual feedback is the gesture feedback. As with touch, we can specify visual feedback to show when a gesture is detected. For example, the following screenshot shows a visual effect that highlights the area selected by a lasso gesture.
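Registering the built-in BubblesPath effect might look like this. BubblesPath is the feedback type named above, but the registration method's name is an assumption; consult the template-generated code for the actual API.

```csharp
// Sketch only: show the BubblesPath effect wherever the screen is touched.
// The method name is assumed from the description above.
GestureFramework.AddTouchFeedback(typeof(BubblesPath));
```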
Step 2: Subscribe to gesture events
The template added a few rectangles to the main window. The following code shows how we can add the gestures (drag, zoom, pinch, rotate) to each of the rectangles using the AddEvent method in EventManager. The AddEvent method takes three parameters:
the scope of the gesture (i.e. the rect UI element)
the name of the gesture, and
the callback method that will be invoked when the gesture is detected.
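Putting those three parameters together, the subscriptions might look like this sketch. The callback names (`DragCallback`, etc.) are hypothetical placeholders, and the exact `AddEvent` signature should be checked against the template code.

```csharp
// Sketch only: subscribe one rectangle to the four gestures.
// Gesture names come from the Gestures class; a custom gesture can be
// passed as a plain string instead (e.g. "MyGesture").
GestureFramework.EventManager.AddEvent(rect, Gestures.Drag, DragCallback);
GestureFramework.EventManager.AddEvent(rect, Gestures.Zoom, ZoomCallback);
GestureFramework.EventManager.AddEvent(rect, Gestures.Pinch, PinchCallback);
GestureFramework.EventManager.AddEvent(rect, Gestures.Rotate, RotateCallback);
```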
While the Gestures class contains the names of the gestures that are available out of the box, we can also define our own gesture and pass its name as a string (e.g. “MyGesture”). I will explain the process of creating new gestures in another post. Following is the definition of the “Drag” gesture. As you can see, the return type may contain more than one object.
So, the event argument (e.Values) of the callback method contains the return types specified in gesture definition. The Get<>() is an extension method that helps you get the return object you want.
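A callback body might then look like the sketch below. The event-argument type name and the `Position` return type are hypothetical; in practice you would request whatever return types the “Drag” gesture definition actually declares.

```csharp
// Sketch only: type names are assumed for illustration.
private void DragCallback(UIElement sender, GestureEventArgs e)
{
    // Get<>() pulls the matching return object out of e.Values.
    var position = e.Values.Get<Position>();
    // ...move the element to follow the touch point...
}
```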
We can also get the raw touch data from the following events:
- GestureFramework.EventManager.MultiTouchChanged: When there are multiple active touch points, this event will be raised once, and the event argument will contain data for all active touch points.
- GestureFramework.EventManager.SingleTouchChanged: When there are multiple active touch points, this event will be raised once for each touch point, and the event argument of each callback will contain data for that specific touch point.
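Subscribing to the two raw-touch events might look like the following sketch; the handler signatures are assumptions based on the descriptions above.

```csharp
// Sketch only: handler shapes assumed from the event descriptions above.
GestureFramework.EventManager.MultiTouchChanged += points =>
{
    // Raised once; 'points' carries data for all active touch points.
};

GestureFramework.EventManager.SingleTouchChanged += point =>
{
    // Raised once per active touch point; 'point' is that point's data.
};
```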
Now that we have reviewed the code generated by the templates, let’s run the application. You should be able to drag, rotate, and resize the blue and green rectangles. Here are a few more examples of gestures in a Silverlight application.
I hope I was able to give you some idea of TouchToolkit. In the upcoming posts, I will explain the rest of the toolkit, including how to define new gestures, the touch recorder, and the automated test framework.
If you are interested in developing multi-touch applications using TouchToolkit, I would be happy to help you any way I can.