Marko Apfel - Afghanistan/Belgium/Germany

Management, Architecture, Programming, QA, Coach, GIS, EAI

Tuesday, July 14, 2015 #

While playing with F# and the ArcGIS Runtime, I tried to re-create a sample C# project in F#.

For the WPF part I used the Visual Studio extensions F# Empty Windows App (WPF) and F# Windows App (WPF, MVVM) together with the NuGet packages FSharp.ViewModule.Core and FsXaml for WPF.

I copied the XAML more or less unchanged into the F# project.


When running the app, it immediately crashed with this error:

Additional information: 'Cannot create unknown type '{}MapView'.' Line number '12' and line position '10'.


For whatever reason it needs an additional hint for the Esri namespace to resolve the control:

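A common way to provide that hint is an explicit clr-namespace mapping on the XAML root element, roughly like this (the exact namespace and assembly names depend on the ArcGIS Runtime version in use, so treat them as an assumption):

xmlns:esri="clr-namespace:Esri.ArcGISRuntime.Controls;assembly=Esri.ArcGISRuntime"

With a mapping like that in place, the <esri:MapView> element can be resolved when the XAML is loaded through FsXaml.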

And voilà – the F# ArcGIS Runtime app showed up.


btw: the designer did not complain at all and displayed the control as expected.

Thursday, July 9, 2015 #

During compilation of the F# Empty Windows App (WPF) I got an error (check the output pane carefully).


To solve this issue you have to trust the XAML Type Provider of this project. You can do this via the Visual Studio options:


Wednesday, June 24, 2015 #

In too many projects I have seen chaos in storing different versions of documents (documents which are edited over time and whose older versions are kept for later analysis).

Each colleague has their own understanding of how to name documents and how to add information about the status/version of the document. So it is not uncommon to see timestamps as pre- or suffixes, user names and user acronyms as suffixes, and version numbers as suffixes. It gets really interesting if multiple users co-work on the same document and each introduces an own naming guideline. After a few versions you are completely lost and no longer know which document is the most recent one. Sometimes you can hope that sorting by the OS timestamp (Date modified) in the File Explorer leads you to the right one. But too often somebody opens a document, accidentally changes something inside (e.g. auto fields, like dates) and confirms the save question during closing with Yes. Then you have an older version with a newer timestamp.

Normally we use the compressed ISO timestamp format (without the dashes) as a prefix, but too often that rule was broken initially and only followed later. So it leads to folder contents like this one:


The first two files don’t follow the rule.

Imagine now the situation that a lot of other, different files are inside the folder. It is almost impossible to recognize what belongs together. It gets even more difficult if the file name changed in the meantime.


I'm not happy with all the different versions inside the folder, and I'm also not happy that we enrich the file name with version information. But it has the benefit that with one look you immediately know from when a document is – notably after sending the file between different parties.

Some years ago I tried to address that problem with Git – having only one file per topic inside the folders and getting the older versions out of the repository in case of necessity. But at that time (2011??) it was not handy. I cannot exactly remember the issues, but it was not worth introducing it.

But now the situation has changed. Git can recognize the renaming of these doc files (e.g. if a new timestamp is prefixed or a version is suffixed) and at the same time track the changes inside these Word documents.

So I gave it a try and added these different versions one by one to the repository to recreate the evolution of the past. To be precise: what would normally happen while working with the document in an ideal world (edit, rename, commit changes) I would now do within a few minutes. The important thing was to keep these things together historically – and not to end up with individual commits or independent versions of the files.


To bring all the already created versions into a repository in a meaningful, versioned way, I did the following (a condensed command sketch follows after the list):

  1. Move all the files to a temporary folder
  2. Run git init in the empty folder to create a local repository there
  3. Move the first version of the file (Folgeprojekt_Leistungsbeschreibung V1.doc) into that folder
  4. Add it to the index (git add Folgeprojekt_Leistungsbeschreibung\ V1.doc)
  5. Commit that change set (git commit -m "initial commit with statement of work for the follow-up project, version 1")
  6. Delete this file
  7. Move the second version into that folder
  8. Run git status to see the new (for the moment untracked) file and the deleted (already tracked) old/previous one
  9. Add the new one to the index (git add Folgeprojekt_Leistungsbeschreibung\ V2.doc)
  10. Add the deletion of the previous version to the index (git add Folgeprojekt_Leistungsbeschreibung\ V1.doc)
  11. Run git status to see that Git is able to recognize the renaming
  12. Commit that change set (git commit -m "add statement of work for the follow-up project, version 2")
  13. Repeat steps 6 to 12 for all the other versions

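Condensed into commands, the whole sequence looks roughly like this (the temporary folder name and the cp/rm calls are just one way to do the move and delete steps; file names are the ones from the example above):

mkdir doc-history && cd doc-history                 # temporary folder
git init                                            # create the local repository

cp ../Folgeprojekt_Leistungsbeschreibung\ V1.doc .  # first version
git add Folgeprojekt_Leistungsbeschreibung\ V1.doc
git commit -m "initial commit with statement of work for the follow-up project, version 1"

rm Folgeprojekt_Leistungsbeschreibung\ V1.doc       # remove the previous version
cp ../Folgeprojekt_Leistungsbeschreibung\ V2.doc .  # move the next version in
git status                                          # one deleted, one untracked file
git add Folgeprojekt_Leistungsbeschreibung\ V1.doc  # stage the deletion
git add Folgeprojekt_Leistungsbeschreibung\ V2.doc  # stage the new version
git status                                          # Git now reports a rename
git commit -m "add statement of work for the follow-up project, version 2"
# repeat the last block for every further version
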
In the log you can now see all the individual changes as one history line of that file.

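A log call like the following should show exactly that (a sketch – the path is whatever the most recent file name is):

git log --follow --oneline -- "Folgeprojekt_Leistungsbeschreibung V2.doc"
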
For example, the diff between version 1 and version 2 shows this:

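A diff with rename detection between the two relevant commits produces such a view (HEAD~1 and HEAD stand in for the commits of version 1 and 2):

git diff -M HEAD~1 HEAD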

I guess that via the similarity index


Git was able to understand that a deleted file plus a new file is only a new version – a rename here.


So Git can build the history line of a given file. I was really surprised to see that, because the doc format is binary and Git needs some additional steps in the background to get that understanding.

Of course you also see the individual changes of the file



I tried this also with the docx Word format. It works in the same way as described above.



Given these capabilities, in future projects I will urge the team to keep only the most recent version of a document in a folder. All versioning is to be done with the repository. This gives clean folders where you immediately get an overview of the distinct documents, and it avoids the situation where you have to open multiple documents to understand the evolution of a document over time.

And additionally you have the opportunity to follow the evolution by diffing the individual versions. With hopefully good commit messages you have additional information about what happened from one version to the next.

When downloading the latest version of WLW you normally get the online installer. It is a small program, wlsetup-web.exe, which downloads the necessary components during setup.

Unfortunately this setup raised the error "Couldn't install programs" on different machines.


Thankfully there is also an offline installer available, containing the complete bundle of live tools:

According to some information on the web it is recommended to use the plain English version; otherwise the installer seems to try to connect to unavailable resources, which also ends up in problems.


Of course, that wlsetup-all.exe is a little bit larger:


But it lets you select the Live tools you need, and after the installation everything is fine.



Sunday, May 17, 2015 #

Saturday morning I received a mail from my parents' NAS (Synology DS215j) about a reboot after a system update. A few hours later my father called me, mentioning that he could not write to the NAS anymore. All files are accessible in read-only mode. All write operations under Windows raise an error.

German: "Sie benötigen Berechtigungen zur Durchführung des Vorgangs"

English: "You need permission to perform this action"

After some research it came to light that the WinBind daemon had not started. It seems that the update from 11. May (rolled out 16. May) is broken and we have to hope for a fix in the future.

In the meantime the following workaround helps:

  1. Log in to the web administration of the DS215j as admin
  2. Open Control Panel > Task Scheduler
  3. Create a new user-defined script
  4. Run the command
    start winbindd ; restart smbd
  5. Schedule it pro forma
    (e.g. once per day, around 30 min after possible reboots related to further updates)
  6. Run it manually

Now you should be able to write to the NAS …

Friday, January 9, 2015 #

XenServer provides several templates for the different operating systems of its hosted virtual machines.


Unfortunately there is no template for ClearOS, so I tried some of the templates which in my opinion fit best.

Here is the list of my attempts:

CentOS 6 (64 bit)

Installation is possible in simple text-only mode (black & white). Disabling Viridian didn't change anything (at the end of the article you will see what I mean by Viridian).

There is a boot parameter "graphical" set per default, but obviously this means something else or doesn't work. I also found a blog article where somebody mentioned adding the boot parameter "text". I was surprised to read this hint – why should a parameter named "text" switch to a graphical environment? And as expected, that parameter brought no help.

Debian Wheezy 7.0 (64-bit)

This virtual machine was not able to start at all and immediately returned:

Jan 9, 2015 6:45:09 PM Error: Starting VM 'Debian Wheezy 7.0 (64-bit) (1)' - The bootloader for this VM returned an error -- did the VM installation succeed?  INVALID_SOURCE
Unable to access a required file in the specified repository: file:///tmp/cdrom-repo-8Q3Fb0/install/vmlinuz.


Red Hat Enterprise Linux 6 (64-bit)

Same experience as with CentOS. OK – this was to be expected; both systems rely on the same sources.


Because I could install Windows 10 pretty easily on the Windows 8 template, I thought I'd give that combination a try. But during the "Storage" step I realized that the disk must be at least 24 GB. That was too much and I stopped this attempt here.

SUSE Linux Enterprise Server 11 SP2 (64-bit)

Same result as with “Debian Wheezy 7.0 (64-bit)”.

Oracle Enterprise Linux 6 (64-bit)

Same experiences as with CentOS.

Ubuntu Precise Pangolin 12.04 (64-bit)

Same result as with “Debian Wheezy 7.0 (64-bit)”. This is not surprising. Debian is the base for Ubuntu.

Other install media

Installation is possible in simple text only mode (black & white).

Other install media – disabled Viridian

As described earlier there is the possibility to deactivate Viridian. I don’t know in detail what happens inside the system, but it makes the difference.

You have to avoid an automatic boot after finishing the configuration. Un-tick this checkbox in the last step and create the new machine:


Now switch to the console of the XenServer


and run this command

xe vm-list

Search for the UUID of the new machine (here 3a1744ce-cbec-73fa-fb19-ea9d9234c06e).

Use this UUID in the next command:

xe vm-param-set uuid="3a1744ce-cbec-73fa-fb19-ea9d9234c06e" platform:viridian=false
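
If you prefer not to copy the UUID by hand, the lookup and the parameter change can be combined like this (the name-label "ClearOS" is an assumption – use whatever you named the machine):

UUID=$(xe vm-list name-label="ClearOS" params=uuid --minimal)
xe vm-param-set uuid="$UUID" platform:viridian=false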

Now you can start this virtual machine and you will get an installer with a UI.



This is a note to myself, or for you in case you found this article via Google.

I can remember that one virtual machine template or configuration made the installation hang after a few seconds. The last two lines were:

trying to remount root filesystem read write... done
mounting /tmp as tmpfs... done

Use one of the other templates …

Thursday, January 8, 2015 #

Today I've installed the Technology Preview of Windows 10 onto my XenServer. With the XenCenter program under an existing Windows 8.1 installation this was an easy task. But this Win 8.1 runs virtualized via Parallels and Bootcamp on my MacBook. That creates double windowing – first the virtualized window of Win 8.1 with XenCenter, and inside XenCenter the window of the XenServer-hosted Windows 10. This is not nice.

But we can shortcut that by using the Remote Desktop App for Mac OS X (you can find it in the App Store). Configuring a remote desktop connection is easy, but it ended up in an error that the connection couldn't be established. That sounds like an issue where RDP access first has to be enabled on the Windows 10 side.

And right: after doing the following I got my RDP session with a much better user experience – no lags anymore, and as fluid as a local session.

  1. Right-click the Start button and choose "System"
  2. Click the link "Remote Settings"
  3. Activate "Allow remote connections to this computer" and tick the checkbox
  4. Click the button "Select Users"
  5. Add your account(s) to the list
  6. In the "Select Users" dialog you have to use this scheme:
    Use the button "Check Names" to resolve and/or check it

Wednesday, January 7, 2015 #

Installing XenServer 6.2 on my new Intel NUC was not as straightforward as I thought.

I burned the downloaded ISO (XenServer-6.2.0-install-cd.iso) onto a USB stick on my MacBook with UNetbootin. Unfortunately this stick was not bootable at all – neither in the NUC nor on the MacBook.

So I switched to my "standby Windows" on Bootcamp and made the next attempt with YUMI. Now XenServer booted, but a few installation steps later it hung with "failed to load com32 file mboot.c32". There is some advice on the web to fix such errors by replacing that file with one from a running installation. Too complex for my purpose.

But one of the hints gave me the idea that another burning program could fix that issue. Some more burning tools are listed there, and I gave Rufus a try. And bingo – that tool created a bootable USB stick which let me complete the installation.

I used these parameters:


Friday, January 2, 2015 #

Usually I use OPlayer or VLC to watch videos on my iPad.

Unfortunately I cannot stream some of them to the TV via AirPlay over the Apple TV – the video gets lost. To be precise: the audio comes from the TV while the video still plays on the iPad. It seems that especially content from my NAS has this problem.

Now I figured out that the player 8Player works well. This player is not as fancy as the others and handles only DLNA-provided content. But because this protocol was already activated on my WD MyCloud, there was nothing else to do.

Run 8Player, navigate through the DLNA videos, start playing and activate the AirPlay option to stream the video to your AppleTV.

Monday, December 8, 2014 #

After a longer period of blogging abstinence I again had to search for the right plugin for Windows Live Writer (WLW) to get pretty-printed source code.

I remembered one from the past which made me really happy, but I couldn't find it anymore. So I am writing this article to have a guideline for the future and maybe to collect some advice from you, the readers, helping me to find the one I was initially searching for...

btw: I would really appreciate it if somebody could recommend good blogging software for the Mac. Currently WLW is the unbeaten champ, but even with Bootcamp and Parallels it is often not as fluid as I would wish.


I found that DLL (WindowsLiveWriter.SourceCode.dll) on an old hard disk. According to the timestamp it is from 01. APR 2009. By copying it to the plugin folder ("C:\Program Files (x86)\Windows Live\Writer\Plugins") it becomes available as "Source code plug-in" with the next start of WLW.


Straightforward, focusing on the necessary things.


Displaying in WLW

There is no hint that this is specially rendered content.


Displaying on blog

// A Hello World! program in C#.
using System;
namespace HelloWorld
{
    class Hello
    {
        static void Main()
        {
            Console.WriteLine("Hello World!");

            // Keep the console window open in debug mode.
            Console.WriteLine("Press any key to exit.");
            Console.ReadKey();
        }
    }
}

PreCode Snippet

This plugin is available for download; the newest version is from 07. MAR 2010. It comes as an msi, and after installation you can find the resources under "C:\Program Files (x86)\FiftyEightBits\PreCode".


The UI goes its own way.


Displaying in WLW

Okay – clearly, this is something special...


Displaying on blog

// A Hello World! program in C#. using System; namespace HelloWorld { class Hello { static void Main() { Console.WriteLine("Hello World!"); // Keep the console window open in debug mode. Console.WriteLine("Press any key to exit."); Console.ReadKey(); } } }

Syntax Highlighter

For the moment it is the oldest plugin in my test. You can download it as an msi. After the installation it appears as SyntaxHighlighter in the plugin list.

According to the web page there should be an options dialog available – but I couldn't find it.


Condensed to the minimum ..


Displaying in WLW

You cannot click inside that code fragment. It appears as one complete object, so you see a frame around it – with the additional links on top.


Additionally, a sidebar pops up on the right where you can modify the code and set some properties.


Displaying on blog

  1. // A Hello World! program in C#.
  2. using System;
  3. namespace HelloWorld
  4. {
  5.     class Hello
  6.     {
  7.         static void Main()
  8.         {
  9.             Console.WriteLine("Hello World!");
  10.
  11.             // Keep the console window open in debug mode.
  12.             Console.WriteLine("Press any key to exit.");
  13.             Console.ReadKey();
  14.         }
  15.     }
  16. }

Code Highlighter

This plugin is based upon the code of "Syntax Highlighter". But instead of forking the original VB code base, the author decided to rewrite it in C#. It is up to you whether to download an installer or the plain DLL and save it in the plugin folder as described earlier. The second option was sufficient for me. After a restart of WLW a new plugin named "Syntax highlighter" appears. I don't know what the official name is – "Code Highlighter" or "Syntax highlighter". One term appears in the URL and the other one in the plugin itself.

This file is from 06. DEC 2009.


It is pretty much the same as its parent.


Displaying in WLW

Also similar to the parent, just with other default values for the properties, which you can again modify in the sidebar.


Displaying on blog

// A Hello World! program in C#.
using System;
namespace HelloWorld
{
    class Hello
    {
        static void Main()
        {
            Console.WriteLine("Hello World!");

            // Keep the console window open in debug mode.
            Console.WriteLine("Press any key to exit.");
            Console.ReadKey();
        }
    }
}

Code Formatter Plugin for WLW

You can download this plugin as an exe or as an msi, but because the exe just calls the msi, you are fine with the msi only.


It comes with a confusing UI. But you can choose another rendering engine and also the type of insert (as text or as png). My first try under Parallels' Coherence mode completely ruined the interface; I was not able to use it at all. This is a test I also have to do later with the other plugins.

The UI is split into two parts – on the left side some options, and on the right side, more or less as an extension, the code window.


Displaying in WLW

Again it is a plugin which freezes the code and shows a frame.


And similar to the others with this behavior, it then displays a sidebar for adjustments.


Displaying on blog

// A Hello World! program in C#.
using System;
namespace HelloWorld
{
    class Hello
    {
        static void Main()
        {
            Console.WriteLine("Hello World!");

            // Keep the console window open in debug mode.
            Console.WriteLine("Press any key to exit.");
            Console.ReadKey();
        }
    }
}

Syntax Highlighter from Arnold Matusz

While finishing this article I found my old favorite – this plugin. Save the DLL into the plugin folder and after a restart you can use it.


Everything we need and like, wrapped in a clear, straightforward UI.


Displaying in WLW

It inserts the code with some formatting options in the background. So there is no frame and no opportunity to modify the appearance later via a sidebar. But of course you can edit the code itself.


Displaying on blog

// A Hello World! program in C#.
using System;
namespace HelloWorld
{
    class Hello
    {
        static void Main()
        {
            Console.WriteLine("Hello World!");

            // Keep the console window open in debug mode.
            Console.WriteLine("Press any key to exit.");
            Console.ReadKey();
        }
    }
}




Friday, December 5, 2014 #



How about secure access from outside into our own home network? Then we can maintain machines, change configurations, fetch files, ... from wherever we are. For this we let a VPN tunnel be established between a local RPi in our network and a hosted server. If this hosted server offers a web-based console, we only need a web browser to access our own resources at home.

The idea for the project comes from my colleague Michael, and I would like to thank him for the first input to get this running.

Starting point

Instead of messing up your official hosted server, I can only recommend starting with one of the cheap offers. For such server playgrounds I use Digital Ocean, which has a really nice package for only $5 per month. But in reality you pay only cents, because only running systems count. So my account, initially charged with $5, still has more than $4 left for further tests.

But besides this, they also make it really easy to get a new machine up. For the creation you only define the name, choose the "hardware" specification and select the operating system – and seconds later you receive a mail with the credentials and the information that your machine is up and running. Amazing!

In case you are interested in testing this provider, let me know. Currently I can send you an invitation with a value of $10, or use this link (be aware, they request your credit card details but don't charge it; it is only for future business with you, and you can delete the details later). $10 – that's enough for a long play period.


Preparing the server

For this sample I chose a Debian-based machine with the smallest hardware specification in New York.

btw: having a server somewhere outside your country of residence offers you some interesting benefits. Why? Because you get an IP which doesn't let the visited page track where you really come from – you obfuscate the one your router gets from your provider – and location-based services may offer you other things.
So far I found the following:

  • Cheaper flight tickets
    Typical price-watching portals try to offer you prices for the area you come from, but the prices vary. I was offered a 10% better price for the same connection from another location.
  • Avoid blocked YouTube videos
    In Germany the GEMA (and others) force YouTube to block a lot of videos because of licensing issues. Notably for videos with music you end up with "Dieses Video ist in Deutschland leider nicht verfügbar" ("Unfortunately, this video is not available in your country.").

Okay – so let’s take this configuration now:


And not half a minute later your machine is online with a public IP address, and after a few minutes you get the mail with your credentials.


Connect to the new server, update it and install OpenVPN

Now ssh to this machine, confirm the host key question with yes and update your password. Use the IP and the password you got via mail.

ssh root@ 

Let’s update the installation with

apt-get update

and install OpenVPN with

apt-get install openvpn

Creating the certificates and keys

The OpenVPN package contains some nice scripts (called easy-rsa) to create all the certificate stuff we need later. So let’s copy that stuff to a place with easier access and go to this folder.

cp -r /usr/share/doc/openvpn/examples/easy-rsa /etc/openvpn

cd /etc/openvpn/easy-rsa/2.0/

There is a file (vars) which contains the default properties for further certificate creation, so we adjust its content to our needs:

nano vars

The last lines contain some export commands, and that's the place where we have to specify our values:

export KEY_CITY="Freising"
export KEY_ORG="Private"
export KEY_EMAIL=""
export KEY_CN=Private
export KEY_NAME=Private
export KEY_OU=Private

After saving the file and quitting nano we source these variables

source ./vars

and clean our environment for the new certificates and keys
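
With the stock easy-rsa 2.0 scripts copied above, this is done with the clean-all script:

./clean-all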


We don't have certificates from a Certificate Authority, so we create our own.
Therefore we start by faking our own Certificate Authority:

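Assuming the stock easy-rsa 2.0 scripts, the command for this step is:

./build-ca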

You see during the input that the default values are taken from our exported ones.

This creates some files below the “keys” folder


    -rw-r--r-- 1 root root 1306 Dec  5 13:49 ca.crt
    -rw------- 1 root root  920 Dec  5 13:49 ca.key


Time to create the keys for our OpenVPN server. You see the same game with default values here. At the end you confirm the two questions with “y”.

./build-key-server OpenVpnServer

We get some new files under the keys folder:


    -rw-r--r-- 1 root root 4002 Dec  5 13:53 OpenVpnServer.crt
    -rw-r--r-- 1 root root  712 Dec  5 13:53 OpenVpnServer.csr
    -rw------- 1 root root  916 Dec  5 13:53 OpenVpnServer.key


With the next command we create the Diffie-Hellman parameters. On the Digital Ocean server this is done in seconds. I did the same on a Raspberry Pi for a similar project and had to wait around half an hour. So you can imagine how powerful the Digital Ocean equipment is!

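With easy-rsa 2.0 and the default KEY_SIZE of 1024 from the vars file, the command for this should be:

./build-dh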

which creates the next file


    -rw-r--r-- 1 root root  245 Dec  5 13:57 dh1024.pem

Creating the keys for the client

Later we need the keys for our client, so let's create them now too. The name of our Raspberry Pi will be alarmpi, so we use this name for the key too. Again you have to confirm the last two questions with "y".

./build-key AlArmPi

The next set of files was created


    -rw-r--r-- 1 root root 3870 Dec  5 14:08 AlArmPi.crt
    -rw-r--r-- 1 root root  704 Dec  5 14:08 AlArmPi.csr
    -rw------- 1 root root  912 Dec  5 14:08 AlArmPi.key


Again we copy the necessary files to a place with easier access in further steps

cp /etc/openvpn/easy-rsa/2.0/keys/ca.* /etc/openvpn/

cp /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem /etc/openvpn/

cp /etc/openvpn/easy-rsa/2.0/keys/OpenVpnServer.* /etc/openvpn/

Later we copy the client relevant stuff via scp to our Raspberry


Under /usr/share/doc/openvpn/examples/sample-config-files/ you can find a zipped configuration file for the server.

You can unzip it and use it as a template or documentation for the content we paste in the next step:

gunzip -d /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz

But for now, to make it easy, let's start from scratch with a simplified one of our own:

nano /etc/openvpn/server.conf

Paste the following content into it

port 1194
proto udp
dev tun
ca /etc/openvpn/ca.crt
cert /etc/openvpn/OpenVpnServer.crt
key /etc/openvpn/OpenVpnServer.key
dh /etc/openvpn/dh1024.pem
cipher BF-CBC
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS"
push "dhcp-option DNS"
keepalive 10 120
max-clients 10
user nobody
group nogroup
status openvpn-status.log
log /etc/openvpn/openvpn.log
verb 6

Let's activate IP forwarding via

echo 1 > /proc/sys/net/ipv4/ip_forward

and modify the routes

iptables -t nat -A POSTROUTING -s -o eth0 -j MASQUERADE

Now we can start our OpenVPN server

openvpn /etc/openvpn/server.conf &

Check the last lines of the log file:

tail /etc/openvpn/openvpn.log

If everything is fine you see these last words:


    Initialization Sequence Completed


Additionally you can check the existence of the new tun0 device. This is our device for the tunneled traffic.

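The listing below looks like plain ifconfig output, so a check along these lines should do (the exact call is an assumption):

ifconfig tun0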

But be aware it can take a while to see it!


    tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 
              inet addr:  P-t-P:  Mask:


Our OpenVPN server is now up and running and we can switch to the client configuration.


Preparing the client

For this tutorial the Raspberry Pi gets a completely fresh Arch Linux ARM distribution. Unfortunately the Arch Linux team provides no up-to-date image, so we use the latest available one and let the pacman package manager do the update for us.

After burning the image to an SD card and booting the RPi, you can ssh into that machine via

ssh root@alarmpi.local

It is important that server and client have roughly the same date and time. So first of all, let's set the clock of the system:

timedatectl set-timezone Europe/London

timedatectl set-time "2014-12-05 20:02"

Be aware that Digital Ocean's servers run in UTC. So I use Europe/London to have the same time. For sure this is not the proper way, but it works.

Otherwise I got errors when starting the service (SSL3_GET_SERVER_CERTIFICATE:certificate verify failed).


And next, let’s update the installation. This will download a lot (more than 100MB), because our image is from JUN 2014 and therefore a little bit outdated. Confirm all questions with “Y”.

pacman -Syu

and install openvpn

pacman -S openvpn

Yes, both sides – the server and the client – use the same package. The configuration used when starting openvpn decides whether it acts as a server or a client.

Now check the availability of the client-side tun device

test ! -c /dev/net/tun && echo openvpn requires tun support || echo tun is available

You should get that output


    tun is available


Now we have to copy the key files from our server to the client. Again, use the IP of your server

scp root@ /etc/openvpn/

scp root@ /etc/openvpn/

scp root@ /etc/openvpn/
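
Written out with <server-ip> as a placeholder and assuming the client keys still sit in the default easy-rsa keys folder on the server, the copies look roughly like this:

scp root@<server-ip>:/etc/openvpn/easy-rsa/2.0/keys/AlArmPi.crt /etc/openvpn/
scp root@<server-ip>:/etc/openvpn/easy-rsa/2.0/keys/AlArmPi.csr /etc/openvpn/
scp root@<server-ip>:/etc/openvpn/easy-rsa/2.0/keys/AlArmPi.key /etc/openvpn/

The client configuration below also references /etc/openvpn/ca.crt, so ca.crt has to come over from the server in the same way.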

Now we have the necessary files on our Raspberry


    AlArmPi.crt          100% 3891     3.8KB/s   00:01   
    AlArmPi.csr          100%  708     0.7KB/s   00:00   
    AlArmPi.key          100%  920     0.9KB/s   00:00


On an Arch Linux system you can find the samples for client configuration files here:


But again we start from scratch.

nano /etc/openvpn/client.conf

And paste these lines

dev tun
proto udp
remote 1194
resolv-retry infinite

ca /etc/openvpn/ca.crt
cert /etc/openvpn/AlArmPi.crt
key /etc/openvpn/AlArmPi.key
ns-cert-type server
verb 3
log /etc/openvpn/openvpn.log


Start the client-side of OpenVPN

openvpn /etc/openvpn/client.conf &

and check again the last lines of the log file

tail /etc/openvpn/openvpn.log

If everything is fine you again see these last words:


    Initialization Sequence Completed


If you see this message


    You must define TUN/TAP device


reboot your Raspberry

shutdown -r now

Verify the configuration


With traceroute it is easy to see the hops of our communication. To use it we first have to install the package

pacman -S traceroute

and then we can trace our traffic with
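
For example, tracing the route to a public host (8.8.8.8 is just an arbitrary target here):

traceroute 8.8.8.8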


This should produce something like


    traceroute to (, 30 hops max, 60 byte packets
     1 (  261.505 ms  
     2 (  262.644 ms  
     3 (  261.967 ms 
     4 (  262.589 ms 
     5 (  262.931 ms


Outside visible IP

There are some "what is my IP address" services available which show you the IP of your entry point to the internet. Usually that is the IP your router got. But with tunneled traffic it should be the IP of our OpenVPN server – the IP of our Digital Ocean server. Let's check this.

Therefore we install a console-based browser.

pacman -S w3m

and then we check our outside visible IP with
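
For example (the site is an arbitrary pick – any "what is my IP" page works):

w3m http://whatismyipaddress.com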


In one of the first lines you see the interesting output


    Your IP:


This is exactly what we expect – the IP of our Digital Ocean server and not the IP from our internet provider.

Tuesday, November 25, 2014 #

ClearCenter offers an almost ready-to-go VirtualBox image.

With the first boot it provides a web-based configuration GUI via a URL.

By default it is not possible to access this URL from the host machine which runs the VirtualBox virtualization software. So it is necessary to forward the request to the virtualized guest (ClearOS).

Changing these settings does not require a shutdown of the server; it can be done while the guest system is running!

Select ClearOS machine > Settings > Network > Port Forwarding

I defined a forwarding from port 8181 of the host machine to port 81 of the guest (as provided by ClearOS):
(check the Guest IP, it is written in the welcome screen)
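
The same rule can also be created from the command line while the guest is running (the VM name "ClearOS" and the rule name are assumptions):

VBoxManage controlvm "ClearOS" natpf1 "clearos-web,tcp,,8181,,81"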


Now it is possible to access the web based configuration tool via:


During my first installation of ClearOS the (web) installer hung during the system update step. It downloaded a lot but didn't apply the updates because of the error "Exception: Didn't install any keys".

The easiest workaround to get rid of this is a manual update on a console. Therefore I switched to the virtual machine, opened a second console with ALT + F2 and logged in as root. With the yum package manager this is a simple task; run the command

yum update

Now the system downloaded (in my case) around 100 MB and applied the updates. At some point I had to confirm something with yes. After a while the system was updated and I could switch back to the browser with the installer.

It was necessary to go back to the previous step and also to reload the page once. Back at the upgrade step, the system now stated that it is up to date and let me go to the next step.

Thursday, November 20, 2014 #

During the update/upgrade of a fresh Moebius installation on a Raspberry Pi with
apt-get update && apt-get upgrade
this question came up:
Configuration file `/etc/issue'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** issue (Y/I/N/O/D/Z) [default=N] ? 
The default hint to keep the current version is mainly meant for longer-running installations, to avoid overwriting existing configuration files. In the case of a fresh installation you should confirm with "Y".

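To answer this question non-interactively on a fresh image (e.g. in a setup script), dpkg can be told to prefer the maintainer's version – a sketch, to be used with care on longer-running installations:

apt-get update && apt-get -o Dpkg::Options::="--force-confnew" -y upgrade
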
Tuesday, June 4, 2013 #


  1. Create KML file
  2. Save it to your public Dropbox folder
  3. Copy the public link
  4. Paste this link in the search field of Google Maps


Enjoy your KML data in Google Maps