Waclaw Chrabaszcz
... there is no spoon ...

Linux & VMware player – Maybe one day we’ll be united

Another quick post inspired by real life. I just reminded myself about VMware Player's Unity mode. In a pure Windows ecosystem it may not make much sense, but Office 2013 on Linux is a dream of many people. So here is a very simple and old trick to present applications executed on a virtual machine, and how much fun it is! It is really nice to enjoy the Compiz experience mixed with MS Office and PowerShell ISE. OK, time to build some Domain Controller …


PowerShell – How to display multicolor messages

A quick post … I'm very busy. I'm scripting right now and I would like to display "green and red messages" to let my colleagues easily interpret script outputs. Take a look at this simple trick:

$server = "server1"
Write-Host "Testing $server connection ... " -NoNewline
if (Test-Connection $server -Quiet) {Write-Host "OK" -ForegroundColor Green }
else {Write-Host "Failed" -ForegroundColor Red }

All the magic is hidden behind the -NoNewline parameter.
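If you use this trick often, the pattern above can be wrapped in a small helper. A sketch of my own; the function name Test-ServerStatus is made up, not a built-in cmdlet:

```powershell
# Reusable wrapper around the -NoNewline trick.
# Test-ServerStatus is a hypothetical helper name.
function Test-ServerStatus {
    param([string[]]$ComputerName)
    foreach ($server in $ComputerName) {
        Write-Host "Testing $server connection ... " -NoNewline
        if (Test-Connection $server -Count 1 -Quiet) {
            Write-Host "OK" -ForegroundColor Green
        }
        else {
            Write-Host "Failed" -ForegroundColor Red
        }
    }
}

# Usage:
Test-ServerStatus -ComputerName server1, server2, server3
```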


Hyper-V & PowerShell – How to create a VM with a differencing disk

How many times have you had to create a VM to test something? How many times have you had to watch the OS installation progress? Enough! All you need is to create a differencing disk on top of your golden image and spin it up. There is another advantage of a differencing disk: you can create a pseudo-stripe and keep the master image on one drive (SSD) and the differencing disks on other hard drives. I like this setup for my R&D games. It saves a lot of storage and improves VM performance. And of course I can repeat something again, again and again …

$myVM = "SQL2012"
$myNetwork = "Internal"
$myVHDpath = "D:\Hyper-V\$myVM\$myVM.vhdx"
$myParentPath = "C:\Users\Public\Documents\Hyper-V\Virtual hard disks\Server2008.vhdx"

$myVHDX = New-VHD -Path $myVHDpath -ParentPath $myParentPath -Differencing
$myVM = New-VM -Name $myVM -MemoryStartupBytes 1GB -VHDPath $myVHDpath
$myVM | Get-VMNetworkAdapter | Connect-VMNetworkAdapter -SwitchName $myNetwork
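And because the whole point is cheap, repeatable labs, the same three lines scale to a whole test bed. A sketch under the same paths; the VM names SQL2012A/B/C are just examples:

```powershell
# Spin up several throwaway VMs off one golden image.
# Each VM gets its own differencing disk; the parent stays untouched.
$myNetwork    = "Internal"
$myParentPath = "C:\Users\Public\Documents\Hyper-V\Virtual hard disks\Server2008.vhdx"

foreach ($name in "SQL2012A", "SQL2012B", "SQL2012C") {
    $vhdPath = "D:\Hyper-V\$name\$name.vhdx"
    New-VHD -Path $vhdPath -ParentPath $myParentPath -Differencing | Out-Null
    $vm = New-VM -Name $name -MemoryStartupBytes 1GB -VHDPath $vhdPath
    $vm | Get-VMNetworkAdapter | Connect-VMNetworkAdapter -SwitchName $myNetwork
    Start-VM -Name $name
}
```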

EASTER EGG - My Ubuntu Gnome desktop

Happy Easter! It is time for some unexpected gift!

Your desktop is something unique. Your desktop is something very personal. Today I will try to show you how to make your workplace (workspace) a place where you want to spend as much time as you can. Another Windows theme? Nooo! Think different and use the same MacOS as everyone? Nooo! Because I am not persuaded by the direction Windows is going, for about two years I have been using Linux more and more. It's a lot of fun, even if it is for sure not a full replacement for Windows so far.

The knowledge described below is a typical noob's cry. I am, and will always be, a Windows/PowerShell engineer.

OK, let's make a few customizations on Ubuntu 14.04 to make it more personal. First we need to install a few apps:

Wallch - a wallpaper rotator. It can replace your wallpaper every 30 minutes. I'm using Dropbox to sync the wallpaper gallery across Windows, Linux and Mac.

Conky - a nice tool to display performance information on your desktop. You will need to spend some time on configuration, but it is easy to find all the required information on the Internet.

Cairo - an Apple-style dock bar. Usually I'm working with two monitors; on one I can use the Unity dock, on the second Cairo.

Gnome Tweak Tool - allows you to customize the mouse pointer and window themes and, what I like the most, apply a custom icon theme.

Compiz - Compiz means the desktop box :) I like wobbly windows and a little bit of app window transparency. Transparency is cool as long as you don't need to watch a movie or do some graphic design work. I attached my Compiz config file with all my lovely effects and the required exceptions (VLC, full-screen YouTube, GIMP). Of course Compiz must be extended with the EXPERIMENTAL effects. Experimental means: don't expect it will always work, but expect a lot of fun!

To install all the required components please execute the script below AS A REGULAR user; some operations should be executed on your profile, and it will sudo when required.

sudo apt-get install git -y
sudo apt-get install wallch -y
sudo apt-get install conky-all -y
sudo apt-get install cairo-dock -y
sudo apt-get install cairo-dock-plug-ins -y
sudo apt-get install gnome-tweak-tool -y
sudo apt-get install compiz -y
sudo apt-get install compizconfig-settings-manager -y
cd && git clone git://anongit.compiz.org/users/soreau/scripts compizexperimental
cd compizexperimental/
chmod +x compiz-addons
./compiz-addons install all
compiz --replace &

After the installation a reboot is recommended (Compiz may hang :). And let's continue our game. It would be nice to auto-start some apps and implement some sweet customization. Under this link https://www.dropbox.com/sh/pclluq79xi1qumz/AAATwaGmJ0p3Ab7wa2yvkg6ka?dl=0 you'll find all the required files. If you are new to Dropbox and you use this link to create your account and download the Dropbox app, you'll receive 0.5 GB from my referral bonus.




Application startup definitions for Wallch, Conky and Cairo. You can copy some or all of the files into ~/.config/autostart/ if you don't want to edit Startup Applications manually. There are a few traps here, by the way; some apps require some delay before starting, so I would recommend you copy the files.
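For reference, an autostart entry in ~/.config/autostart/ is a plain .desktop file, and the X-GNOME-Autostart-Delay key is what deals with the "needs a delay before starting" trap. A sketch using Conky as an example; adjust Exec to taste:

```ini
# ~/.config/autostart/conky.desktop - example entry
[Desktop Entry]
Type=Application
Name=Conky
Exec=conky
X-GNOME-Autostart-enabled=true
X-GNOME-Autostart-Delay=15
```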


All the files required to launch Conky. I've lost track of what exactly is in there, but you will find a few nice config files. Copy to:




My personal icon set, unique and not published anywhere else. Put it into the hidden .icons folder and use Gnome Tweak Tool to change your theme.



The Compiz Configuration Settings Manager config file with all my customizations and transparency exceptions. In CCSM click the import profile button and point it at my config file.


One of my awesome wallpapers. Just for a start …


Ah... it's obvious: ~ means your home folder, /home/{your_login}/.

APPSENSE – Unattended installation of AppSense DesktopNow

What is AppSense? And especially, what is AppSense for me? I can call myself a GPO fundamentalist, and in my opinion I can achieve everything using Group Policies. However, the role of the engineer is to know, not to believe. The role of the engineer is to be open to everything. So, for the next couple of months I will study the AppSense way of user management. I will try to be objective and focus on the advantages in comparison to our good old GPO way.

What are the strong points of AppSense?

  • Streams user settings on demand in response to user actions, reducing network traffic and logon times.
  • Simple migration of user profile data and settings between desktops, operating systems and delivery systems.
  • Personalization settings are managed and streamed at the application level, enabling independent management of user profile data on a per-application basis and allowing applications to be upgraded and swapped with no user impact.
  • Assists migration of users to Microsoft Windows 7.
  • Automatic management of user application personalization.
  • Removes the potential for profile corruption.
  • Enables a consistent quality of service for the user regardless of the environment delivery mechanism.
  • Manages personalization settings across distributed server silos.
  • Simple migration from existing profiles.
  • Malicious or accidental user environment changes can be automatically self-healed.
  • Minimizes support costs and maximizes user productivity.
  • Applies user policy dynamically in any desktop delivery mechanism.
  • Ensures users remain compliant with policy regardless of how they receive their working environment.
  • Quickly implements business policies which can be shared and utilized across operating system boundaries and different application delivery mechanisms, by use of triggers, actions and conditions.
  • Introduces pre-built corporate policy best practice with AppSense Policy Templates.

If we talk about the architecture, AppSense is a typical agent-managed app. You need to deploy agents to all your clients (as for SCCM or App-V) to start managing the environment on your new terms. As shown in the picture, there are three layers in the infrastructure:

Consoles – can be installed on management servers or on the admin's workstation; each major component has its own separate console. The management console needs to be hooked into a Management Server.

Management Servers – store configurations and personalization in SQL databases. Thanks to this concept we can achieve much better policy granularity, and AppSense is OU independent :). To manage global and local load you can build multiple instances of management servers and use any of the known NLB techniques to address your needs. I would say that from the maintenance perspective AppSense is a typical multitier web application: NLB web servers + clustered SQL.

Agents – using the BITS protocol, the agent queries the Management Server for policies and personalization; in the same way the agent sends virtualized registry changes to be stored in SQL. Thanks to that, personalization can roam from one machine to another and, if we wish, between platforms (client Win XP/7/8 <-> Server 2008/2012).

OK, after this short background it is time to install the evaluation version of AppSense. To obtain the AppSense sources you need to go to https://www.myappsense.com/ and request a myAppSense account. You will have to provide your corporate data; requests from public domains, e.g. @gmail or @hotmail, are usually ignored. It might be a problem for students. It looks like AppSense is focused on the corporate market only.

Once you've got the binaries you can start the installation. I would recommend the manual installation; it is nice and easy, and the installation wizard helps you resolve all prerequisite issues with one click. However, I am an automation maniac, so I will do it this way (I'm using Server 2008 R2):

rem @echo off if you like

REM installation source
SET APPSENSESOURCE=C:\AppSenseDesktopNow8Dec2014

REM prerequisites
ServermanagerCMD -install NET-Framework-Core
ServermanagerCMD -install Web-Server
ServermanagerCMD -install Web-Asp-Net
ServermanagerCMD -install BITS
start /wait %APPSENSESOURCE%\Software\Prerequisites\vcredist2010_SP1_x64.exe /quiet /norestart
start /wait %APPSENSESOURCE%\Software\Prerequisites\vcredist2010_SP1_x86.exe /quiet /norestart
start /wait %APPSENSESOURCE%\Software\Prerequisites\vcredist2013_x64.exe /quiet /norestart
start /wait %APPSENSESOURCE%\Software\Prerequisites\vcredist2013_x86.exe /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Prerequisites\msxml6.msi /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Prerequisites\msxml6_x64.msi /quiet /norestart
start /wait %APPSENSESOURCE%\Software\Prerequisites\dotNetFx40_Full_x86_x64.exe /quiet /norestart

REM appsense
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\ApplicationManagerConsole64.msi /quiet /norestart
start /wait msiexec /p %APPSENSESOURCE%\Software\Products\ApplicationManagerConsole64.msp /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\ApplicationManagerDocumentation64.msi /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\ApplicationManagerWebServices64.msi /quiet /norestart
start /wait msiexec /p %APPSENSESOURCE%\Software\Products\ApplicationManagerWebServices64.msp /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\ConfigurationAssistant64.msi /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\EnvironmentManagerConsole64.msi /quiet /norestart
start /wait msiexec /p %APPSENSESOURCE%\Software\Products\EnvironmentManagerConsole64.msp /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\EnvironmentManagerDocumentation64.msi /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\EnvironmentManagerPolicyTools64.msi /quiet /norestart
start /wait msiexec /p %APPSENSESOURCE%\Software\Products\EnvironmentManagerPolicyTools64.msp /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\EnvironmentManagerTools64.msi /quiet /norestart
start /wait msiexec /p %APPSENSESOURCE%\Software\Products\EnvironmentManagerTools64.msp /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\LicensingConsole64.msi /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\LicensingConsoleDocumentation64.msi /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\ManagementCenterDocumentation64.msi /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\ManagementConsole64.msi /quiet /norestart
start /wait msiexec /p %APPSENSESOURCE%\Software\Products\ManagementConsole64.msp /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\ManagementServer64.msi /quiet /norestart
start /wait msiexec /p %APPSENSESOURCE%\Software\Products\ManagementServer64.msp /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\PerformanceManagerConsole64.msi /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\PerformanceManagerDocumentation64.msi /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\PersonalizationServer64.msi /quiet /norestart
start /wait msiexec /p %APPSENSESOURCE%\Software\Products\PersonalizationServer64.msp /quiet /norestart
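Side note: the product list above can also be collapsed into two loops, in the same batch style. A sketch; the caveat is that a plain wildcard loop runs in alphabetical order, which happens to match the list above but is not guaranteed to respect product dependencies:

```bat
REM Install every MSI, then apply every MSP patch
for %%f in (%APPSENSESOURCE%\Software\Products\*.msi) do start /wait msiexec /i "%%f" /quiet /norestart
for %%f in (%APPSENSESOURCE%\Software\Products\*.msp) do start /wait msiexec /p "%%f" /quiet /norestart
```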

The process of building the environment is divided into two parts: the installation and the configuration. Next month we will configure the Management Servers and deploy the agents.

Server 2012 and PowerShell remoting – How to create a failover cluster without leaving your desk(top)

Who's the king? PowerShell, of course. Let's use its beauty to create a 3-node failover cluster. Creating a cluster is nothing unusual, so where is the beauty? In PowerShell remoting. RDP is for noobs. Geeks(withBlogs) do such tasks this way:

# Install Failover Clustering and File Server Roles
Invoke-Command -ComputerName Server1, Server2, Server3 -ScriptBlock {
Get-WindowsFeature *Cluster* | Add-WindowsFeature -IncludeManagementTools
Get-WindowsFeature *File-Services* | Add-WindowsFeature -IncludeManagementTools
Get-WindowsFeature *FS-FileServer* | Add-WindowsFeature -IncludeManagementTools
}

# Configure the iSCSI Target with Initiator IDs
Set-IscsiServerTarget -ComputerName DC -TargetName Contoso-ISCSI-SAN -InitiatorIds `
"Iqn:iqn.1991-05.com.microsoft:server1.contoso.com", `
"Iqn:iqn.1991-05.com.microsoft:server2.contoso.com", `
"Iqn:iqn.1991-05.com.microsoft:server3.contoso.com"

# Connect iSCSI Targets (the target portal is the DC, as configured above)
Invoke-Command -ComputerName Server1, Server2, Server3 -ScriptBlock {
New-IscsiTargetPortal -TargetPortalAddress DC
Connect-IscsiTarget -NodeAddress (Get-IscsiTarget).NodeAddress -IsPersistent:$True
}

# set physical iSCSI Disks to Online
Invoke-Command -ComputerName Server1 -ScriptBlock {
$Disks = Get-Disk | Where-Object IsOffline
foreach ($Disk in $Disks) {
Set-Disk -Number $Disk.Number -IsReadOnly 0
Set-Disk -Number $Disk.Number -IsOffline 0
Initialize-Disk $Disk.Number -PartitionStyle MBR
$Part = New-Partition -DiskNumber $Disk.Number -UseMaximumSize -AssignDriveLetter
Format-Volume -DriveLetter $Part.DriveLetter -FileSystem NTFS -Confirm:$False
}

# Loop through volumes and catch format failures, then retry.
do {
[Array]$Volumes = Get-Volume | Where-Object FileSystem -eq ''
foreach ($Vol in $Volumes) {
Format-Volume -DriveLetter $Vol.DriveLetter -FileSystem NTFS -Confirm:$False
}
} while ($Volumes.Count -gt 0)
}

# create the cluster
Invoke-Command -ComputerName Server1 -ScriptBlock {
Test-Cluster -Node Server1, Server2, Server3
New-Cluster -Name MyCluster -Node Server1, Server2, Server3
# let's create a Scale-Out File Server
Get-ClusterAvailableDisk | ?{ $_.ScsiAddress -eq 50331651 } | Add-ClusterDisk
Add-ClusterSharedVolume "Cluster Disk 2"
#Add-ClusterFileServerRole -Storage "Cluster Disk 2" -Name myCluster
Add-ClusterScaleOutFileServerRole -Name "Cluster-SOFS"
Move-ClusterGroup -Name Cluster-SOFS -Node Server1
#$volumes = Get-Volume | where filesystem -eq CSVFS
md C:\ClusterStorage\Volume1\UserDocs
New-SmbShare -Name UserDocs -Path C:\ClusterStorage\Volume1\UserDocs -FullAccess Contoso\Administrator
}

Let's check results ….
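The result can be checked from PowerShell as well, still without leaving the desk. A quick sanity-check sketch (assuming the same server and share names as above):

```powershell
# Verify the cluster and the SOFS share remotely
Invoke-Command -ComputerName Server1 -ScriptBlock {
    Get-ClusterNode | Format-Table Name, State            # all three nodes Up?
    Get-ClusterGroup | Format-Table Name, OwnerNode, State
    Get-ClusterSharedVolume | Format-Table Name, State
    Get-SmbShare -Name UserDocs | Format-List Name, Path, ScopeName
}
```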

Server 2012 R2 – Storage Spaces – A Software Defined Storage by Microsoft

You know ... when I was young (a bear in the mountains :)), a server meant a file server, and Novell as well. In 2000 Microsoft released Server 2000 and Active Directory, and changed the world forever. We will never forget the year 2002, when Windows XP was born, and of course Server 2003. It was a great time to participate in that revolution. We were so proud doing migrations to the XP/2003 world. Once it was done, it was time to rest. Not us … Microsoft did.

It's funny, but what you should beware of in life is … success. Slowly but constantly you become lazy and you start missing opportunities. Something similar happened once Microsoft became Macrosoft. Almost drunk with their huge success, they made a few strange decisions (e.g. Vista) and left a lot of free space in their original technology: the file server.

More and more dedicated storage solutions came to the market and took over files. Of course a SAN (Storage Area Network) is a backbone for Windows Servers, but a NAS (Network Attached Storage) doesn't need any Windows at all. Why do we choose NAS? Due to two factors.

Let's take a look at your personal files. WHAT??!! I've got 100 MB of files??!! I thought I had only my CV … and a few things from my previous computer, copied three years ago to be reviewed someday. Multiply this story by 2000 and we've got a typical file server. On a typical file server about 20% of the stored files are active, and most of the rest will never be opened again. Moreover, users store thousands of copies of the same document under various paths. Moreover, many documents, e.g. credit agreements, are 99% identical in content, except for the personal data. Why not store only one copy of a file shared by plenty of users and locations? What's more, why not split files into blocks and store equal blocks only once?

By the way, these pictures come from Microsoft TechNet. So in a well-optimized world each user has his own metadata, e.g. different file attributes: a small chunk of data combined with pointers to shared or private file blocks. If file2 updates block A, that block becomes private, but B and C are still shared. Of course the "compression ratio" grows with the number of copies of the file. Unfortunately data deduplication consumes a lot of CPU power and disk IOPS; however, we don't have to run it on the fly, we can schedule it overnight.

But there is one more thing. We know that some files are active, opened day by day by many users. We would like to guarantee fast access to them, whereas most files we would like to store as cheaply as possible. If we attach a fast SSD disk as D:\ and a slow SATA HDD as E:\, we can move "hot" files to D:\. However, in a few months some other files will become hot, and many of our files on D:\ will cool down. If we move files from one drive to the other, users will be confused about where their files are. All we need is virtualization of the physical disks. Users see only the virtualized layer and access files on one mapped drive; they don't care about our activities. But underneath the virtualization layer we can move files between fast SSD and slow HDD disks. This is called data tiering.

Of course NAS solutions offer data deduplication and tiering, and we quickly found these advantages to be the key to file storage success. What? Success once again? Does it mean we should be careful again? Yes, indeed. To offer good deduplication and tiering a NAS needs storage processors and storage software well protected by copyrights. Moreover, there is this shy, silent new guy in the corner: the Storage Admin. No one knows what he's doing, and he has no idea how to fix SharePoint, SQL Server and other end-user-related technologies. Looking to reduce operational costs, we need something cheaper but easy to manage for a regular WinAdmin. The answer is JBOD, Just a Bunch Of Disks: a pure disk rack without a storage processor or advanced storage software. And thanks to Storage Spaces, your Windows Server 2012 R2 (file) server can take care of data deduplication and tiering.

Storage Spaces are a virtualization layer for physical disks: we can combine multiple disks into one volume and manage them under the hood. We can mirror data if we need data protection, and combine fast/slow, cheap/expensive, 7200 rpm/15k/SSD disks in various combinations, including data tiering. We can even set hot-spare disks. Storage Spaces can move hot data to the fast tier automatically, or we can manually pin particular files/folders to a particular tier.

Data deduplication can be enabled at the volume level. On a typical file server with users' docs you should save about 50% of space. On a VHD library you can gain even 80%. Yes, that's a huge number. All you need is Windows Server 2012 R2. So ... let's do it!

#enumerate physical disks
Get-PhysicalDisk | Format-Table FriendlyName, MediaType, CanPool
$PhysicalDisks = (Get-PhysicalDisk -CanPool $True)

#disk virtualization and tiering
New-StoragePool -FriendlyName MyStoragePool -StorageSubsystemFriendlyName "Storage Spaces*" -PhysicalDisks $PhysicalDisks
Update-StoragePool -FriendlyName MyStoragePool -WhatIf:$True

#assign disks to proper tiers
2..5 | % {Set-PhysicalDisk -FriendlyName PhysicalDisk$_ -MediaType HDD}
6..8 | % {Set-PhysicalDisk -FriendlyName PhysicalDisk$_ -MediaType SSD}

#create fast and slow tier
New-StorageTier -StoragePoolFriendlyName myStoragePool -FriendlyName "SSDdrives" -MediaType SSD
New-StorageTier -StoragePoolFriendlyName myStoragePool -FriendlyName "HDDdrives" -MediaType HDD
$ssd = Get-StorageTier -FriendlyName "SSD*"
$HDD = Get-StorageTier -FriendlyName "HDD*"

#create software defined storage
New-VirtualDisk -StoragePoolFriendlyName myStoragePool `
-FriendlyName "2tier" -ResiliencySettingName "Mirror" `
-storageTiers $ssd, $hdd -StorageTierSizes 50GB,100GB

#create volume on virtualized layer
New-Volume -StoragePoolFriendlyName "myStoragePool" -FriendlyName "UserData" -AccessPath "H:" `
-ResiliencySettingName "Mirror" -ProvisioningType "Fixed" -StorageTiers $ssd, $hdd `
-StorageTierSizes 50GB,100GB -FileSystem NTFS

#enable deduplication
Enable-DedupVolume -Volume H: -UsageType Default
Set-DedupVolume -Volume H: -MinimumFileAgeDays 0
Start-DedupJob H: -Type Optimization -Memory 50
Update-DedupStatus -Volume "H:"

#share a volume
md H:\UserDocs
New-SmbShare -Name UserDocs -Path H:\UserDocs -FullAccess Contoso\Administrator

#assign a file to fast tier
Set-FileStorageTier -DesiredStorageTier $ssd -FilePath "H:\UserDocs\SharedVM\Windows7.vhd"
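Once the first optimization job finishes, the promised savings can be verified. A quick check; the numbers will obviously vary with your data:

```powershell
# How much did deduplication actually save on H:?
Get-DedupStatus -Volume "H:" | Format-List Volume, FreeSpace, SavedSpace, OptimizedFilesCount
Get-DedupVolume -Volume "H:" | Format-List Volume, SavingsRate, SavedSpace
Get-DedupJob    # any optimization jobs still running?
```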

But … there is one more thing … :)

Microsoft is already working on the next release, Windows Server 10:

  • Storage Replica – you will replicate volumes to other locations for DR purposes; it can be sync or async replication.
  • Storage Quality of Service – you will set max and min disk performance for VMs hosted on Hyper-V, so your developers will not consume the IOPS required for production.

But this is the story for another beer.

XenDesktop 7.5 – quick reporting and documentation in Excel

Boring reporting, #$%^&* reporting. No one likes it, but communication with managers is part of our lives whether we like it or not … There are many ways we deal with it. Today I would like to demonstrate how to easily get Desktop Director's data into Microsoft Excel.

Before we do it, we need to download and install PowerPivot for Microsoft Excel 2010. It is an easy Next, Next installation. Once you are ready, you can launch Excel.

Go to the PowerPivot tab; a new Excel window will pop up.

On the ribbon find the From Data Feeds button and click it.

For Friendly Connection Name provide your site or broker name.

For data feed url: http://{yourDeliveryControllerFQDN}/Citrix/Monitor/OData/v1/Data.
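If Excel is not at hand, the same Monitor Service feed also answers plain PowerShell. A sketch; the Machines entity set and the column names are from my memory of the 7.x OData schema, so inspect $feed to see what your version actually exposes:

```powershell
# Query the Desktop Director OData feed directly.
# Assumption: replace the FQDN; the feed uses Windows authentication by default.
$broker = "yourDeliveryControllerFQDN"
$url = "http://$broker/Citrix/Monitor/OData/v1/Data/Machines"
$feed = Invoke-RestMethod -Uri $url -UseDefaultCredentials
# Each ATOM entry carries its fields under content.properties
$feed | ForEach-Object { $_.content.properties } |
    Format-Table DnsName, CurrentRegistrationState, AgentVersion
```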



I would like to wish you Merry Christmas and Happy New Year. All the best for you and your families.


… BTW … for XenApp 7.x it works as well …

XenApp and XenDesktop 7.6 - connection leasing under the hood

Some time ago Citrix introduced XenDesktop and XenApp version 7.6. One of the key improvements, and the best one for me, is Connection Leasing. At last Citrix delivered a replacement for the good old LHC (Local Host Cache). From XD 7.0 to XenDesktop & XenApp 7.5 many admins refused to migrate to the most recent version due to the lack of session sharing (all your apps within one ICA channel), session pre-launching (2000 users trying to log on at 9:00 AM :)) and the lack of any database resilience mechanism. In old XenApp 6.5 every server (not in worker mode) was able to store a local copy of the data store and, in case of database absence, to use its local cache to launch the requested application. Static data was stored in this (small) database, whereas all dynamic data, e.g. current load, was handled by Data Collectors.

After the merge of XenApp with XenDesktop this situation became much more complicated. Tens of thousands of VDI machines instead of hundreds of servers, tons of user-to-machine assignments, and billions of state changes when a machine is powered on or a user just logs in. The 7.x database is very dynamic, because it plays the role not only of the DataStore but of the DataCollector as well. Of course you can protect your database with SQL AlwaysOn, mirroring or clustering; however, even a well-protected database can collapse. And even if your database runs, your network might fail. In this case all active sessions remain, but no new session can be launched until the database is back.

XenDesktop 7.6 introduces Connection Leasing … and XenApp is just a different licensing model for the same product. Citrix has not yet documented Connection Leasing; all we know is that it is stored in some XML files. Connection Leasing is enabled by default; you can easily validate it using the following PoSH command:

If you would like to change the default behavior, you can disable/enable this feature:

Set-BrokerSite -ConnectionLeasingEnabled $false
Set-BrokerSite -ConnectionLeasingEnabled $true

Let's try to view the current leases; we see two launch leases and one enumeration lease.

If we would like to update the local data on demand, we can execute:
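For reference, the relevant cmdlets ship in the Broker snap-in. The names below are as I remember them from 7.6; verify with Get-Command -Noun Broker* on your controller:

```powershell
Add-PSSnapin Citrix.Broker.Admin.V2   # run on the Delivery Controller

# Is connection leasing on?
Get-BrokerSite | Select-Object Name, ConnectionLeasingEnabled

# View the current leases (launch and enumeration)
Get-BrokerLease

# Refresh the local lease cache on demand
Update-BrokerLocalLeaseCache
```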


But where is this data really stored, for use in case of database absence?

In the hidden folder C:\ProgramData\Citrix\Broker\Cache you will find the following folders:

Each of them stores some (pseudo-random?) folders and XML files with the definitions of the related item. As an example I will present the Calc app.

Let's take a look at an example enumeration of the available resources for a user on a particular device (please keep in mind the policy filters for the endpoint).

And an example application lease:

We can find here when exactly this lease expires; after this date, in case of database failure, the user will be unable to re-launch the app. We also find the user's and the worker machine's SIDs: based on them, in case of DB failure the user will be redirected to exactly the same worker, and I assume that if any load evaluator is applied, there must be enough capacity for the new session. And a session sharing key, yeah! Session sharing is really back! The last remaining question is: where is the information about the application delivered by this session? In my opinion it is hidden behind RSApps, but as you can see, there is no direct answer.
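If you want to poke around the cache yourself, nothing Citrix-specific is needed; this is plain filesystem and XML reading, using the path given above:

```powershell
# Dump every cached lease/enumeration XML for inspection
$cache = "C:\ProgramData\Citrix\Broker\Cache"
Get-ChildItem $cache -Recurse -Filter *.xml | ForEach-Object {
    Write-Host "`n--- $($_.FullName)" -ForegroundColor Cyan
    ([xml](Get-Content $_.FullName -Raw)).OuterXml
}
```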

SCCM 2012 R2 and MDT – MDT Task Sequence

Today I'm going to create an MDT Task Sequence in the SCCM world. MDT offers advanced sequences, better prepared for customization and conditional installation. For example, based on variables we can build an accountant- or developer-specific workstation, with custom disk configuration, application sets, and much, much more.

This time we will create a simple MDT Task Sequence to get familiar with the MDT components. I would also like to show a few troubleshooting steps during the initial setup. I think it could be valuable for SCCM & MDT beginners.

  1. In the first step we will share the MDT folder. We will create a separate folder to demonstrate the MDT package content and ease troubleshooting in the future.
  2. Let's create new MDT Task Sequence
  3. Client Task Sequence
  4. Task Sequence name
  5. We will join the domain
  6. We are not going to use this package for capturing new images
  7. Boot image – standard SCCM x64 boot
  8. In some specific situations you may be unable to create a new MDT package. Don't give up, we will fix it.
  9. troubleshooting: download and install/upgrade MDT 2013
  10. troubleshooting: Make sure your deployment share is created; more details on how to configure it can be found in my previous post
  11. troubleshooting: Update Deployment Share
  12. troubleshooting: Let's optimize the boot image updating process

  13. Troubleshooting: After that, unregister and re-register your MDT integration component in SCCM. Make sure your SCCM console is CLOSED during this operation
  14. Troubleshooting: OK, the registration step – the wizard should detect your site server and the site code without issues
  15. Now the integration is fixed. You can start the SCCM Admin Console and repeat the steps from the beginning up to the MDT package creation
    create the package using the \\UNC-path to your MDT share; make a subfolder for the package
  16. Package Details
  17. Image to Install – Windows 10 … not yet, haha. As you can see, you can use this sequence for OS image creation.
  18. Of course Zero touch deployment
  19. Standard SCCM Client Package
  20. Standard USMT package
  21. MDT Settings package, once again we will store it on the MDT share in a separate subfolder
  22. Package details …. boring …. boring … boring
  23. Bye bye Windows XP and your SYSPREP, I will never forget you …
  24. Summary and go!
  25. Enjoy the progress bar
  26. Finish – Yuppie!!!


Let's check the MDT packages' content. There is something there, and for MDT fanboys (if there are any) it looks familiar.


OK, let's push this content to the Distribution Points, grab a coffee, and we will try the installation.
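And since this is an automation blog after all, the push to the Distribution Points can be scripted too. A sketch with the ConfigurationManager module; the site code and the package/DP group names here are placeholders, not from the original post:

```powershell
# Assumes the ConfigMgr console (and thus the module) is installed locally
Import-Module "$env:SMS_ADMIN_UI_PATH\..\ConfigurationManager.psd1"
Set-Location "PS1:"   # placeholder site code; use your own

# Distribute the MDT packages to a Distribution Point group
Start-CMContentDistribution -PackageName "MDT Files" -DistributionPointGroupName "All DPs"
Start-CMContentDistribution -PackageName "MDT Settings" -DistributionPointGroupName "All DPs"
```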