Waclaw Chrabaszcz
... there is no spoon ...

APPSENSE – Unattended installation of AppSense DesktopNow

What is AppSense? Especially, what is AppSense for me? I can call myself a GPO fundamentalist, and in my opinion I can achieve everything using Group Policies. However, the role of the engineer is to know, not to believe. The role of the engineer is to be open to everything. So, for the next couple of months I will study the AppSense way of user management. I will try to be objective, and focus on the advantages in comparison to our good old GPO way.

What are the strong points of AppSense?

  • Streaming user settings on-demand in response to user actions, reducing network traffic and logon times.
  • Simple migration of user profile data and settings between desktops, operating systems and delivery systems.
  • Personalization settings are managed and streamed at application level enabling independent management of user profile data on a per application basis and allowing applications to be upgraded and swapped with no user impact.
  • Assist migration of users to Microsoft Windows 7.
  • Automatic management of user application personalization.
  • Remove the potential for profile corruption.
  • Enable consistent quality of service to the user regardless of the environment delivery mechanism.
  • Manage personalization settings across distributed server silos.
  • Simple migration from existing profiles.
  • Malicious or accidental user environment changes can be automatically self-healed.
  • Minimize support costs and maximize user productivity.
  • Apply user policy dynamically in any desktop delivery mechanism.
  • Ensure users remain compliant with policy regardless of how they receive their working environment.
  • Quickly implement business policies which can be shared and utilized across operating system boundaries and different application delivery mechanisms by use of triggers, actions and conditions.
  • Introduce pre-built corporate policy best practice with AppSense Policy Templates.

Talking about the architecture, AppSense is a typical agent-management app. You need to deploy agents to all your clients (as for SCCM or App-V) to start managing the environment on your new terms. As shown in the picture, there are three layers in the infrastructure:

Consoles – can be installed on management servers or on the admin's workstation; each major component has its own separate console. The Management Console needs to be hooked into a Management Server.

Management Servers – store configurations and personalization in SQL databases; thanks to this concept we can reach much better policy granularity, and AppSense is OU independent :). To manage global and local load you can build multiple instances of management servers and use any NLB technique you know to address your needs. I would say that from the maintenance perspective AppSense is a typical multi-tier web application: NLB web servers + clustered SQL.

Agents – using the BITS protocol the agent queries the Management Server for policies and personalization; in the same way the agent sends virtualized registry changes to be stored in SQL. Thanks to that, personalization can roam from one machine to another and, if we wish, between platforms (client Win XP/7/8 <-> Server 2008/2012).

OK, after this short background it is time to install the evaluation version of AppSense. To obtain the AppSense sources you need to go to https://www.myappsense.com/ and request a myAppSense account. You will have to provide your corporate data; requests from public domains, e.g. @gmail or @hotmail, are usually ignored. It might be a problem for students. It looks like AppSense is focused on the corporate market only.

Once you've got the binaries you can start the installation. I would recommend the manual installation, it is nice and easy. The installation wizard helps you resolve all prerequisite issues with one click. However, I am an automation maniac, so I will do it this way (I'm using Server 2008 R2):

rem @echo off if you like

REM installation source
SET APPSENSESOURCE=C:\AppSenseDesktopNow8Dec2014

REM prerequisites
ServermanagerCMD -install NET-Framework-Core
ServermanagerCMD -install Web-Server
ServermanagerCMD -install Web-Asp-Net
ServermanagerCMD -install BITS
start /wait %APPSENSESOURCE%\Software\Prerequisites\vcredist2010_SP1_x64.exe /quiet /norestart
start /wait %APPSENSESOURCE%\Software\Prerequisites\vcredist2010_SP1_x86.exe /quiet /norestart
start /wait %APPSENSESOURCE%\Software\Prerequisites\vcredist2013_x64.exe /quiet /norestart
start /wait %APPSENSESOURCE%\Software\Prerequisites\vcredist2013_x86.exe /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Prerequisites\msxml6.msi /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Prerequisites\msxml6_x64.msi /quiet /norestart
start /wait %APPSENSESOURCE%\Software\Prerequisites\dotNetFx40_Full_x86_x64.exe /quiet /norestart

REM appsense
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\ApplicationManagerConsole64.msi /quiet /norestart
start /wait msiexec /p %APPSENSESOURCE%\Software\Products\ApplicationManagerConsole64.msp /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\ApplicationManagerDocumentation64.msi /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\ApplicationManagerWebServices64.msi /quiet /norestart
start /wait msiexec /p %APPSENSESOURCE%\Software\Products\ApplicationManagerWebServices64.msp /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\ConfigurationAssistant64.msi /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\EnvironmentManagerConsole64.msi /quiet /norestart
start /wait msiexec /p %APPSENSESOURCE%\Software\Products\EnvironmentManagerConsole64.msp /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\EnvironmentManagerDocumentation64.msi /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\EnvironmentManagerPolicyTools64.msi /quiet /norestart
start /wait msiexec /p %APPSENSESOURCE%\Software\Products\EnvironmentManagerPolicyTools64.msp /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\EnvironmentManagerTools64.msi /quiet /norestart
start /wait msiexec /p %APPSENSESOURCE%\Software\Products\EnvironmentManagerTools64.msp /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\LicensingConsole64.msi /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\LicensingConsoleDocumentation64.msi /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\ManagementCenterDocumentation64.msi /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\ManagementConsole64.msi /quiet /norestart
start /wait msiexec /p %APPSENSESOURCE%\Software\Products\ManagementConsole64.msp /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\ManagementServer64.msi /quiet /norestart
start /wait msiexec /p %APPSENSESOURCE%\Software\Products\ManagementServer64.msp /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\PerformanceManagerConsole64.msi /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\PerformanceManagerDocumentation64.msi /quiet /norestart
start /wait msiexec /i %APPSENSESOURCE%\Software\Products\PersonalizationServer64.msi /quiet /norestart
start /wait msiexec /p %APPSENSESOURCE%\Software\Products\PersonalizationServer64.msp /quiet /norestart

The process of building the environment is divided into two parts: the installation and the configuration. Next month we will configure the Management Servers and deploy agents.



Server 2012 and PowerShell remoting – How to create a failover cluster without leaving your desk(top)

Who's the king? Of course PowerShell. Let's try to use its beauty to create a 3-node failover cluster. The cluster itself is nothing unusual, so where is this beauty? In PowerShell remoting. RDP is for noobs. Geeks(withBlogs) do such tasks this way:

# Install Failover Clustering and File Server Roles
Invoke-Command -ComputerName Server1, Server2, Server3 -ScriptBlock {
    Get-WindowsFeature *Cluster* | Add-WindowsFeature -IncludeManagementTools
    Get-WindowsFeature *File-Services* | Add-WindowsFeature -IncludeManagementTools
    Get-WindowsFeature *FS-FileServer* | Add-WindowsFeature -IncludeManagementTools
}

# Configure the iSCSI Target with initiator IQNs
Set-IscsiServerTarget -ComputerName DC -TargetName Contoso-ISCSI-SAN -InitiatorIds `
"IQN:iqn.1991-05.com.microsoft:server1.contoso.com", `
"IQN:iqn.1991-05.com.microsoft:server2.contoso.com", `
"IQN:iqn.1991-05.com.microsoft:server3.contoso.com"

# Connect iSCSI Targets
Invoke-Command -ComputerName Server1, Server2, Server3 -ScriptBlock {
    New-IscsiTargetPortal -TargetPortalAddress 10.0.0.1
    Connect-IscsiTarget -NodeAddress (Get-IscsiTarget).NodeAddress -IsPersistent:$True
}

# set physical iSCSI disks to online, then initialize and format them (on one node only)
Invoke-Command -ComputerName Server1 -ScriptBlock {
    $Disks = Get-Disk | Where-Object IsOffline
    foreach ($Disk in $Disks) {
        Set-Disk -Number $Disk.Number -IsReadOnly $False
        Set-Disk -Number $Disk.Number -IsOffline $False
        Initialize-Disk $Disk.Number -PartitionStyle MBR
        $Part = New-Partition -DiskNumber $Disk.Number -UseMaximumSize -AssignDriveLetter
        Format-Volume -DriveLetter $Part.DriveLetter -FileSystem NTFS -Confirm:$False
    }

    # Loop through volumes and catch format failures, then retry.
    do {
        [Array]$Volumes = Get-Volume | Where-Object FileSystem -eq ''
        foreach ($Vol in $Volumes) {
            Format-Volume -DriveLetter $Vol.DriveLetter -FileSystem NTFS -Confirm:$False
        }
    } while ($Volumes.Count -gt 0)
}

# create the cluster
Invoke-Command -ComputerName Server1 -ScriptBlock {
    Test-Cluster -Node Server1, Server2, Server3
    New-Cluster -Name MyCluster -Node Server1, Server2, Server3
    # let's create a Scale-Out File Server
    Get-ClusterAvailableDisk | ?{ $_.ScsiAddress -eq 50331651 } | Add-ClusterDisk
    Add-ClusterSharedVolume "Cluster Disk 2"
    #Add-ClusterFileServerRole -Storage "Cluster Disk 2" -Name myCluster
    Add-ClusterScaleOutFileServerRole -Name "Cluster-SOFS"
    Move-ClusterGroup -Name Cluster-SOFS -Node Server1
    #$volumes = Get-Volume | where filesystem -eq CSVFS
    md c:\ClusterStorage\Volume1\UserDocs
    New-SmbShare -Name UserDocs -Path c:\ClusterStorage\Volume1\UserDocs -FullAccess Contoso\Administrator
}

Let's check results ….
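
The result can be verified remotely too. A minimal check, assuming the cluster, node and share names used above:

```powershell
# verify cluster membership and the new share without leaving your desk(top)
Invoke-Command -ComputerName Server1 -ScriptBlock {
    Get-Cluster | Format-Table Name
    Get-ClusterNode | Format-Table Name, State
    Get-SmbShare -Name UserDocs | Format-Table Name, Path
}
```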



Server 2012 R2 – Storage Spaces – A Software Defined Storage by Microsoft

You know ... when I was young (a bear in the mountains :)) a server meant a file server, and a Novell one at that. In 2000 Microsoft released Server 2000 and Active Directory and changed the world forever. We will never forget the year 2001 when Windows XP was born, and of course Server 2003. It was a great time to participate in this revolution. We were so proud doing migrations to the XP/2003 world. Once it was done someone chose to rest, but not us … Microsoft did.

It's funny, but what you should beware of in life is … success. Slowly but constantly you become lazy and you miss opportunities. Something similar happened once Microsoft became Macrosoft. Almost drunk with huge success, they made a few strange decisions (e.g. Vista) and left a lot of free space in their original technology – the file server.

More and more dedicated storage solutions came to the market and took over files. Of course SAN (Storage Area Network) is a backbone for Windows Servers, but NAS (Network Attached Storage) doesn't need any Windows. Why do we choose NAS? Due to two factors.

Let's take a look at your personal files. WHAT??!! I've got 100 MB of files??!! I thought I've got my CV only … and a few things from my previous computer. I copied these 3 years ago and someday I will review them. Multiply this story by 2000 and we've got a typical file server. On a typical file server about 20% of stored files are active, and most of them will never be opened again. Moreover, users store thousands of copies of the same document on various paths. And many documents, e.g. credit agreements, are identical in 99% of the content except the personal data. Why not store only one copy of a file for plenty of users and locations? What's more, why not split files into blocks, and store identical blocks only once?

BTW, these pictures come from Microsoft TechNet. So, in a well-optimized world each user has his own metadata, e.g. different file attributes: a small chunk of data combined with pointers to shared or private file blocks. If file2 updates block A, it becomes private, but B and C are still shared. Of course the "compression ratio" grows with the number of copies of the file. Unfortunately data deduplication consumes a lot of CPU power and disk IOPS; however, we don't have to run it on the fly, we can schedule it overnight.

But there is one more thing. We know that some files are active and opened day by day by many users. We would like to guarantee fast access to them, whereas most files we would like to store as cheaply as possible. If we attach a fast SSD disk as D:\ and a slow SATA HDD as E:\, we can move "hot" files to D:\. However, in a few months some other files will become hot, and many of our files on D: will cool down. If we move files from one drive to the other, users will be confused about where their files are. All we need is the virtualization of physical disks. Users see only the virtualized layer and access files on just one mapped drive; they don't care about our activities. But we, under the virtualization layer, can move files between fast SSD and slow HDD disks. This is called data tiering.

Of course NAS solutions offer data deduplication and tiering, and we quickly found these advantages to be the key to file storage success. What? Success once again? Does it mean we should be careful again? Yes indeed. To offer good deduplication and tiering, a NAS needs Storage Processors and storage software well protected by copyrights. Moreover, there is this shy, silent new guy in the corner. He's a Storage Admin. No one knows what he's doing, and he has no idea how to fix SharePoint, SQL Server and other end-user-related technologies. Looking for a reduction of operational costs we need something cheaper but easy to manage for a regular WinAdmin. The answer is JBOD – Just a Bunch Of Disks: a pure disk rack without a Storage Processor or advanced storage software. And your Windows Server 2012 R2 (file) server, thanks to Storage Spaces, can take care of data deduplication and tiering.

Storage Spaces are a virtualization layer for physical disks: we can combine multiple disks into one volume and manage them under the hood. We can mirror data if we need data protection, and combine fast/slow, cheap/expensive, 7200/15k/SSD disks in various combinations including data tiering. We can even set HotSpare disks. Storage Spaces can move hot data to the fast tier automatically, or we can manually assign particular files/folders to a particular tier.

Data deduplication can be enabled on the volume level. On a typical file server with users' docs you should save about 50% of space. On a VHD library you can gain even 80%. Yes, that's a huge number. All you need is Windows Server 2012 R2. So ... let's do it!

#enumerate physical disks
Get-PhysicalDisk | Format-Table FriendlyName, MediaType, CanPool
$PhysicalDisks = (Get-PhysicalDisk -CanPool $True)
Get-StorageSubSystem

#disk virtualization and tiering
New-StoragePool -FriendlyName MyStoragePool -StorageSubsystemFriendlyName "Storage Spaces*" -PhysicalDisks $PhysicalDisks
Update-StoragePool -FriendlyName MyStoragePool -WhatIf:$True

#assign disks to proper tiers
Get-PhysicalDisk
2..5 | % {Set-PhysicalDisk -FriendlyName PhysicalDisk$_ -MediaType HDD}
6..8 | % {Set-PhysicalDisk -FriendlyName PhysicalDisk$_ -MediaType SSD}

#create fast and slow tier
New-StorageTier -StoragePoolFriendlyName myStoragePool -FriendlyName "SSDdrives" -MediaType SSD
New-StorageTier -StoragePoolFriendlyName myStoragePool -FriendlyName "HDDdrives" -MediaType HDD
$ssd = Get-StorageTier -FriendlyName "SSD*"
$HDD = Get-StorageTier -FriendlyName "HDD*"

#create software defined storage
New-VirtualDisk -StoragePoolFriendlyName myStoragePool `
-FriendlyName "2tier" -ResiliencySettingName "Mirror" `
-storageTiers $ssd, $hdd -StorageTierSizes 50GB,100GB

#create volume on virtualized layer
New-Volume -StoragePoolFriendlyName "myStoragePool" -FriendlyName "UserData" -AccessPath "H:" `
-ResiliencySettingName "Mirror" -ProvisioningType "Fixed" -StorageTiers $ssd, $hdd `
-StorageTierSizes 50MB,100MB -FileSystem NTFS

#enable deduplication
Enable-DedupVolume -Volume H: -UsageType Default
Set-DedupVolume -Volume H: -MinimumFileAgeDays 0
Start-DedupJob H: -Type Optimization -Memory 50
Update-DedupStatus -Volume "H:"

#share a volume
md H:\UserDocs
New-SmbShare -Name UserDocs -Path H:\UserDocs -FullAccess Contoso\Administrator

#assign a file to fast tier
Set-FileStorageTier -DesiredStorageTier $ssd -FilePath "H:\UserDocs\SharedVM\Windows7.vhd"

But … there is one more thing … :)

Microsoft is already working on the new release of Windows Server 10

  • Storage Replica – you will replicate volumes to other locations for DR purposes; it could be sync or async replication.
  • Storage Quality of Service – you will set max and min disk performance for VMs hosted on Hyper-V, so your developers will not consume the IOPS required for production.

But this is the story for another beer.



XenDesktop 7.5 – quick reporting and documentation in Excel

Boring reporting, #$%^&* reporting. No one likes it, but communication with managers is a part of our lives whether we like it or not … There are many ways we deal with it. Today I would like to demonstrate how to easily get Desktop Director's data and put it into Microsoft Excel.

Before we do it, we need to download and install PowerPivot for Microsoft Excel 2010. It is an easy Next, Next installation. Once you are ready you can launch Excel.

Go to the PowerPivot tab; a new Excel window will pop up.

On the ribbon find the From Data Feeds button and click it.

For the Friendly Connection Name provide your site or broker name.

For the data feed URL use: http://{yourDeliveryControllerFQDN}/Citrix/Monitor/OData/v1/Data.
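
If you prefer raw data over Excel, the same feed can be queried straight from PowerShell. This is a sketch only: the controller FQDN is a placeholder, and your account needs access to the Monitor Service:

```powershell
# pull session records from the Monitor OData feed (hypothetical controller name)
$uri = "http://ddc01.contoso.com/Citrix/Monitor/OData/v1/Data/Sessions"
$feed = Invoke-RestMethod -Uri $uri -UseDefaultCredentials
# each entry is one session record
$feed | Select-Object -First 5
```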

 

 

I would like to wish you Merry Christmas and Happy New Year. All the best for you and your families.

 

… BTW … for XenApp 7.x it works as well …



XenApp and XenDesktop 7.6 - connection leasing under the hood

Some time ago Citrix introduced XenDesktop and XenApp version 7.6. One of the key improvements, and the best one for me, is Connection Leasing. At last Citrix delivered a replacement for the good old LHC (Local Host Cache). From XD 7.0 up to XenDesktop & XenApp 7.5 many admins refused to migrate to the most recent version due to the lack of session sharing (all your apps within one ICA channel), session pre-launching (2000 users try to log on at 9:00 AM :)) and the lack of any database resilience mechanism. In old XenApp 6.5 every server (not in worker mode) was able to store a local copy of the data store and, in case of database absence, to use its local cache to launch the requested application. Static data was stored in this (small) database, whereas all dynamic data, e.g. the current load, was handled by Data Collectors.

After the merge of XenApp with XenDesktop this situation became much more complicated. Tens of thousands of VDI machines instead of hundreds of servers, tons of user-to-machine assignments and billions of state changes when a machine is powered on or a user just logs in. The 7.x database is very dynamic, because it plays the role not only of the DataStore but of the DataCollector as well. Of course you can protect your database with SQL AlwaysOn, Mirroring, or Clustering; however, even a well-protected database can collapse. And even if your database runs, your network might fail. In this case all active sessions remain, but no one new can launch a session until the database is back.

XenDesktop 7.6 introduces Connection Leasing … and XenApp is just a different licensing model for the same product. Citrix has not documented Connection Leasing yet; all we know is that it is stored in some XML files. Connection Leasing is enabled by default; you can easily validate it using the following PoSH command:
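
A quick check of the site-wide setting; a sketch, assuming the Citrix Broker snap-in is available on the Delivery Controller:

```powershell
# load the Citrix snap-ins and read the Connection Leasing flag
Add-PSSnapin Citrix*
Get-BrokerSite | Select-Object ConnectionLeasingEnabled
```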

If you would like to change default behavior you can disable/enable this feature:

Set-BrokerSite -ConnectionLeasingEnabled $false
Set-BrokerSite -ConnectionLeasingEnabled $true


Let's try to view the current leases; we see two launch leases and one enumeration.
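
If memory serves, the same snap-in exposes a cmdlet for listing the leases; treat this as an assumption rather than documented behavior:

```powershell
# enumerate the leases currently cached by this Delivery Controller
Add-PSSnapin Citrix*
Get-BrokerLease
```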


If we would like to refresh the local lease data on demand we can execute:

Update-BrokerLocalLeaseCache

But where is this data really stored, in case of database absence?

In the hidden folder C:\ProgramData\Citrix\Broker\Cache you will find the following folders:
Apps
Desktops
Icons
Leases
Workers

Each of them stores some (pseudo-randomly named?) folders and XML files with definitions of the related items. As an example I will present the Calc app.

Let's take a look at an example enumeration of the resources available to a user on a particular device (please keep in mind the policy filters for the endpoint).

And an example application lease

We can find here when exactly this lease expires; after this date, in case of a database failure, the user will be unable to re-launch the app. There are the user and worker machine SIDs; based on them, in case of a DB failure, the user will be redirected to exactly the same worker, and I assume that if any load evaluator is applied there must be enough capacity for the new session. And there is a session sharing key. Yeah! Session sharing is really back! The last remaining question is: where is the information about the application delivered by this session? In my opinion it is hidden behind RSApps, but as you can see, there is no direct answer.



SCCM 2012 R2 and MDT – MDT Task Sequence

Today I'm going to create an MDT Task Sequence in the SCCM world. MDT offers advanced sequences, better prepared for customization and conditional installation. For example, based on variables we can build an accountant- or developer-specific workstation, with a custom disk configuration, application sets, and much, much more.

This time we will create a simple MDT Task Sequence to get familiar with the MDT components. I would also like to show a few troubleshooting steps for the initial setup. I think it could be valuable for SCCM & MDT beginners.

  1. In the first step we will share the MDT folder. We will create a separate folder to demonstrate the MDT package content and to ease troubleshooting in the future.
  2. Let's create new MDT Task Sequence
  3. Client Task Sequence
  4. Task Sequence name
  5. We will join the domain
  6. We are not going to use this package for capturing new images
  7. Boot image – standard SCCM x64 boot
  8. In some specific situations you may be unable to create a new MDT package. Don't give up, we will fix it.
  9. troubleshooting: download and install/upgrade MDT 2013
    http://www.microsoft.com/en-us/download/confirmation.aspx?id=40796
  10. troubleshooting: make sure your deployment share is created; you can find more details on how to configure it in my previous post
  11. troubleshooting: Update Deployment Share
  12. troubleshooting: Let's optimize the boot image updating process

  13. Troubleshooting: After that unregister and register once again your MDT integration component in SCCM. Make sure your SCCM console is CLOSED during this operation
  14. Troubleshooting: ok, registration step – the wizard should detect your server site and the site code without issues
  15. Now the integration is fixed. You can start the SCCM Admin Console and repeat the steps from the beginning up to the MDT package creation
    create the package using a \\UNC-path to your MDT share, and make a subfolder for the package
  16. Package Details
  17. Image to Install – Windows 10 … not yet, haha. As you can see, you can use this sequence for OS image creation.
  18. Of course Zero touch deployment
  19. Standard SCCM Client Package
  20. Standard USMT package
  21. MDT Settings package, once again we will store it on the MDT share in a separate subfolder
  22. Package details …. boring …. boring … boring
  23. Bye bye Windows XP and your SYSPREP, I will never forget you …
  24. Summary and go!
  25. Enjoy the progress bar
  26. Finish – Yuppie!!!

 

Let's check the MDT packages content. There is something there, and for MDT fanbois (if there are any) it looks familiar.

 

OK, let's push this content to the Distribution Points, grab a coffee, and we will try the installation.



SCCM, MDT and SCVMM – How to convert VHD into WIM

It is not possible, for obvious reasons: a VHD is a dynamic structure, like a SQL database, whereas a WIM is closer to a ZIP file – very static and designed to conserve storage space by compression and links to duplicated files.

What else can we do? We can capture the VHD state to a WIM. I'm going to perform this operation on a Windows 7 machine, so unfortunately this time there are no PoSH cmdlets like Mount-VHD. We will need the imagex command; maybe you will need to download and install the Windows AIK:

http://www.microsoft.com/en-us/download/details.aspx?id=5753

Since we need to execute multiple DiskPart commands, we will have to rely on script files:

 

diskpart /s attach.txt
imagex /compress maximum /check /scroll /boot /capture F: C:\TEMP\Win7.wim "Win7syspreped"
diskpart /s detach.txt

attach.txt

select vdisk file="C:\TEMP\Win7.vhd"
attach vdisk

detach.txt

select vdisk file="C:\TEMP\Win7.vhd"
detach vdisk

Now you can compare the file sizes :)
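
A trivial way to compare them, assuming the paths used above:

```powershell
# dynamic VHD vs. compressed WIM capture, in MB
"{0:N0} MB" -f ((Get-Item C:\TEMP\Win7.vhd).Length / 1MB)
"{0:N0} MB" -f ((Get-Item C:\TEMP\Win7.wim).Length / 1MB)
```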



PowerShell – Sexy PoSH console

PowerShell doesn't have to be boring.

Download this module: http://www.powertheshell.com/download/modules/PTSAeroConsole.zip

Extract it into C:\Windows\System32\WindowsPowerShell\v1.0\Modules
(you may need to unblock both module files by right-clicking | Properties | Unblock)

Now you can start PowerShell window:

 

Import-Module PTSAeroConsole
Enable-AeroGlassTheme
Disable-AeroGlassTheme

 

Enjoy!



AZURE - Stairway To Heaven

 

Before you start reading, please start playing this song.

 

OK boys and girls, time to get familiar with clouds. Time to become a meteorologist. To be honest I don't know how to start. Is the cloud better or worse than on-premises resources … hmm … it is just different. I think that for successful adoption in the cloud world, IT dinosaurs need to forget some "Private Cloud" virtualization bad habits and learn a new way of thinking.

Take a look:

- I don't need any tapes or CDs (the physical kingdom of Windows XP and 2000)

- I don't need any locally stored MP3s (CD virtualization :-)

- I can just stream music to my computer, no matter whether any on-site infrastructure is powered on.

Why not do exactly the same with a web server, SQL, or a Windows server rented for a while? Let's go to the other side of the mirror. First, register yourself for the free one-month trial; as a happy MSDN subscriber you've got a monthly budget to spend. In addition, in the default setting your limit protects you against losing real money if your toys consume too much traffic and space.

http://azure.microsoft.com/en-us/pricing/free-trial/

Once your account is ready forget the web portal, we are PowerShell knights.

http://go.microsoft.com/?linkid=9811175&clcid=0x409

#Authenticate yourself in Azure
Add-AzureAccount

#download once your settings file
Get-AzurePublishSettingsFile

#Import it to your PowerShell Module
Import-AzurePublishSettingsFile "C:\Azure\[filename].publishsettings"

#validation
Get-AzureAccount
Get-AzureSubscription

#where are Azure datacenters
Get-AzureLocation

#You will need it :)
Update-Help

#a storage account is related to a physical location; there are two datacenters on each continent, try the nearest one to you
#all your VMs will store their VHD files on your storage account
#your storage account name must be globally unique, so I assume that words like account or server are already taken
New-AzureStorageAccount -StorageAccountName "[YOUR_STORAGE_ACCOUNT]" -Label "AzureTwo" -Location "West Europe"
Get-AzureStorageAccount

#it looks like you are ready to deploy the first VM; which templates can we use?
Get-AzureVMImage | Select ImageName

#what a mess, let’s choose Server 2012
$ImageName = (Get-AzureVMImage)[74].ImageName

$cloudSvcName = '[YOUR_STORAGE_ACCOUNT]'
$AdminUsername = "[YOUR-ADMIN]"
$adminPassword = '[YOUR_PA$$W0RD]'
$MediaLocation = "West Europe"

$vmnameDC = 'DC01'


#burn baby burn !!!
$vmDC01 = New-AzureVMConfig -Name $vmnameDC -InstanceSize "Small" -ImageName $ImageName   `
    | Add-AzureProvisioningConfig -Windows -Password $adminPassword -AdminUsername $AdminUsername   `
    | New-AzureVM -ServiceName $cloudSvcName

#ice, ice baby …
Get-AzureVM
Get-AzureRemoteDesktopFile -ServiceName "[YOUR_STORAGE_ACCOUNT]" -Name "DC01" -LocalPath "c:\AZURE\DC01.rdp"

As you can see it is not just a New-AzureVM: you need to associate your VM with an AzureVMConfig (it sets your template), an AzureProvisioningConfig (it sets your customizations), and a storage account. Later you'll need to put the machine in a specific subnet, attach an HDD and much more. On a second reading I noticed that I am using the same name for the STORAGE and SERVICE accounts; please be aware of it if you need to split these values.

Conclusions:
- the pipe rules!
- at the beginning it is hard to change your mind and accept the fact that it is easier to remove and recreate a VM than to move it to a different subnet :)
- by default everything is firewalled, with limited access to DNS, but NATed outside on custom ports. It is good to check these translations sometimes on the web portal.
- if you remove your VMs, your hard drives remain in storage and MS will charge you :). Use Remove-AzureVM -DeleteVHD.

For me Azure is a lot of fun; once again I can be a newbie and learn on every page. For me Azure offers real freedom in the deployment of VMs without arguing with NetAdmins, WinAdmins, DBAs, PMs and other Change Managers. Unfortunately, sooner or later they will come to my haven and change it into …

 



PowerShell – duplicated files in Windows Media Player library

Holidays! … But why is it raining? Let's clean up some duplicated MP3s; maybe the rain will stop in the meantime.

For sure this code is not optimized, and I am not recommending that anyone use it. But if you uncomment the move actions, you can reduce the number of duplicated media files in your Windows Media Player library.

You can consider it an example of how to access and browse WMP using PowerShell.

 

$wmp = New-Object -COM WMPlayer.OCX
$playlist = $wmp.mediaCollection.getAll()
$i = 1
do {
    if ($playlist.item($i).sourceURL -like "*.mp3")
    {
        if ($playlist.item($i).name -eq $playlist.item($i-1).name)
        {
            write-host "n-1 : " $playlist.item($i-1).sourceURL
            write-host $playlist.item($i).name " : " $playlist.item($i).sourceURL
            if ($playlist.item($i).sourceURL.tostring().length -gt $playlist.item($i-1).sourceURL.tostring().length)
            {
                Write-Host -ForegroundColor yellow "moving " $playlist.item($i).sourceURL
                #Move-Item $playlist.item($i).sourceURL "c:\output"
            }
            else
            {
                Write-Host -ForegroundColor red "moving " $playlist.item($i-1).sourceURL
                #Move-Item $playlist.item($i-1).sourceURL "c:\output"
            }
        }
    }
    $i++
}
while ($i -le ($playlist.count - 1))

Ahh … a quick description: you've got media files in many folders. The script walks the WMP library and, in case of a duplicate (two neighboring library entries with the same name), the file with the shorter URL path wins. To actually move the losing files, uncomment the #Move-Item lines. And let WMP rebuild the library before the next script run; it may take up to 3 days.