Pseudo Knowledge Base

Useful stuff I've collected... Enjoy.

Friday, April 27, 2012

Configuration error with Enterprise Library 5.0

One part of an ISL solution involved implementing Enterprise Library 5.0, using the Unity, Logging and Configuration features, in a BizTalk pipeline component.  The pipeline was built and tested with a BizTalk receive and send port and worked fine.  The configuration was initialized in the BTSNTSvc.exe.config file and redirected to a separate XML configuration file.  This pipeline component was then deployed to a different BizTalk development server and the pipeline was used in an ESB itinerary solution.  The result was a series of exceptions as shown below.


The ISL Instrumentation Component Experienced an Unexpected Exception.  Please check the Unity configuration source to determine why the tracing objects were not available.
 
Exception Message:
Value cannot be null.
Parameter name: section


This made no sense, as the ‘section’ mentioned in the exception message was correctly entered in the BizTalk configuration file.  It took a while to identify the difference between the two development environments that was causing the problem.  The itinerary was initiated by a call to a WCF service, so the pipeline was running in IIS and picking up its configuration from the service's web.config file rather than from BTSNTSvc.exe.config.  Once the configuration was migrated to that file the problem was solved.  Ultimately we decided to implement the configuration at the machine.config level to make sure that all host processes were covered.
NOTE: we were implementing a refactored and GAC’ed version of the Enterprise Library, so there is no risk that the machine.config settings will impact other users of the Enterprise Library.
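For reference, the redirection looks roughly like the snippet below and has to live in whichever .config file the host process actually reads (BTSNTSvc.exe.config, the service's web.config, or machine.config).  The section and type names shown are from the stock Enterprise Library 5.0; our refactored, GAC’ed build used its own assembly names, and the filePath is a placeholder.

<configSections>
  <section name="enterpriseLibrary.ConfigurationSource"
           type="Microsoft.Practices.EnterpriseLibrary.Common.Configuration.ConfigurationSourceSection, Microsoft.Practices.EnterpriseLibrary.Common" />
</configSections>
<enterpriseLibrary.ConfigurationSource selectedSource="fileSource">
  <sources>
    <!-- Redirect all Enterprise Library configuration to an external file -->
    <add name="fileSource"
         type="Microsoft.Practices.EnterpriseLibrary.Common.Configuration.FileConfigurationSource, Microsoft.Practices.EnterpriseLibrary.Common"
         filePath="C:\Config\EntLibConfiguration.config" />
  </sources>
</enterpriseLibrary.ConfigurationSource>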


Geordie

Posted On Friday, April 27, 2012 3:35 PM | Comments (0) | Filed Under [ BizTalk Coding ]

Friday, December 30, 2011

Installing UDDI 3 Server on a BizTalk Dev Box

I have just installed the UDDI 3 server on my BizTalk development environment.  All looked good until I tried to open the publish page on the web interface.  The page 'http://localhost/uddi/edit/frames.aspx' returned a 'page cannot be displayed' error.  The same error occurred when I tried to open the Subscribe and Coordinate pages.

After playing around with the configuration for a while I tracked the problem down to the page requiring HTTPS.  By connecting to the UDDI Service Console and right clicking on the UDDI node in the left panel, I was able to open the UDDI properties dialogue box.  The 'Require SSL for publication requests' option can then be disabled on the Security tab.

P.S.  Loading the UDDI server with the Microsoft ESB providers, tModels etc. can be done using the "Microsoft.Practices.ESB.UDDIPublisher.exe" tool located in "C:\Program Files (x86)\Microsoft BizTalk ESB Toolkit 2.1\Bin".  In my case I used Windows authentication to load the data into the UDDI service.

Geordie

Posted On Friday, December 30, 2011 12:47 AM | Comments (1) | Filed Under [ BizTalk ]

Wednesday, October 26, 2011

Creating a Send Port that can Generate Multiple Flat File Formats

I recently needed to create a send port that could output a flat file, but I could not determine which flat file schema to use at design time (or at least I didn’t want to).  I wanted to process multiple flat file formats with the same orchestration and output them through the same send port. My orchestration dynamically maps the document to the appropriate flat file document schema. The generated orchestration message type is set to System.Xml.XmlDocument so the orchestration can handle the varying message schema types, but how do you output them through a single port?
The problem is with the pipeline. The port uses a custom pipeline that uses a flat file assembler to generate the flat file. In the configuration of the assembler the flat file document schema to be generated is specified. So how to dynamically set the pipeline… I have yet to find an answer to this problem (I’ll let you know when I do!).
There is an easier solution to this problem. It is very simple once you know how the assembler and for that matter the receive pipeline disassemblers work. When we specify the schema in the components we are basically telling the pipeline which schema to use to process the message. So what happens if we do not specify a schema?
This is the interesting part, especially for the assembler component. BizTalk already knows which schema to use. Using the message type (the combination of TargetNamespace#RootNodeName) from the message, BizTalk can ask the configuration database which schema to apply. Therefore, by not specifying a schema in the flat file assembler you create a pipeline that can assemble any flat file for which BizTalk has a deployed schema. This works equally well for the XML disassembler, but it is not so easy for flat files. Like most of the other pipeline stages, the disassemble stage can contain multiple disassembler components, but unlike the rest the message flows through them differently. As the message hits the first disassembler, that component tries to process the message; if it fails, the next disassembler is tried. When one succeeds in identifying the message and applying the schema, the remaining disassembler components are ignored and the message carries on down the pipeline. All other multi-component pipeline stages are processed sequentially. This information is displayed in the pipeline designer, but I never realised the significance.
Note that the benefit of specifying the schema is performance: there is more overhead if the pipeline has to search for the schema to use.
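To make that lookup concrete, here is a rough sketch (not BizTalk's internal code) of how the message type value is composed; BizTalk promotes it as BTS.MessageType and uses it to resolve the deployed schema when the assembler has no schema configured:

static string GetMessageType(System.Xml.XmlDocument doc)
{
    // TargetNamespace#RootNodeName, e.g. "http://MyCompany.Schemas/Invoice#Invoice"
    System.Xml.XmlElement root = doc.DocumentElement;
    return root.NamespaceURI + "#" + root.LocalName;
}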
Hope this helps
Geordie

Posted On Wednesday, October 26, 2011 12:46 AM | Comments (0) | Filed Under [ BizTalk ]

Thursday, September 1, 2011

NullReferenceException when calling the Retrieve method of the OrganizationServiceProxy object

I have recently started to update a very successful SAP CRM integration I originally built 5 or 6 years ago. We have recently started the implementation of CRM 2011 so the business has decided to take the opportunity to change the data that they want to synchronize between the 2 systems. Luckily the integration is both modular and to a large degree dynamic. The core logic should remain relatively untouched and only the SAP and CRM connection dlls should need any real work.
The changes to the SAP connector were simple and took a matter of a couple of hours. Partially because I still have the virtual development machine I used to create the SAP proxy code.
The CRM connector has been a different story. There have been a number of changes to the CRM services since CRM 3. So, like all developers, I started exploring and testing the old and new ways to communicate with CRM. To that end I created a test project and started by adding a service reference to the new CRM organization service (http://ServerName/OrgName/XRMServices/2011/Organization.svc).  The first method call to retrieve an account worked without any issues.
 
private CrmService.OrganizationServiceClient service = new CrmService.OrganizationServiceClient();

private void GetAccount_Click(object sender, EventArgs e)
{
   QueryExpression query = new QueryExpression("account");
   ConditionExpression condition1 = new ConditionExpression();

   condition1.AttributeName = "accountnumber";
   condition1.Operator = ConditionOperator.Equal;
   condition1.Values.Add("5500");

   // Build the filter based on the conditions.
   FilterExpression filter = new FilterExpression();
   filter.FilterOperator = LogicalOperator.And;
   filter.Conditions.Add(condition1);

   // Set the Criteria field.
   query.Criteria.AddFilter(filter);

   query.ColumnSet.AllColumns = true;
   ColumnSet columns = new ColumnSet();
   columns.AllColumns = true;
   Entity accnt = service.RetrieveMultiple(query)[0];
}

I then explored the options around early binding and wrote the following method based on an MSDN article (http://msdn.microsoft.com/en-us/library/gg334754.aspx).
 
private void btnEarlyBinding_Click(object sender, EventArgs e)
{
   OrganizationServiceProxy serviceProxy;
   IOrganizationService orgService;

   Uri orgUri = new Uri(@"http://ServerName/OrgName/XRMServices/2011/Organization.svc");

   ClientCredentials credentials = new ClientCredentials();
   credentials.Windows.ClientCredential = new System.Net.NetworkCredential("CrmAdministrator", "XXXXX", "domain");

   serviceProxy = new OrganizationServiceProxy(orgUri, null, credentials, null);

   // This statement is required to enable early-bound type support.
   serviceProxy.ServiceConfiguration.CurrentServiceEndpoint.Behaviors.Add(new ProxyTypesBehavior());
   orgService = (IOrganizationService)serviceProxy;

   Guid guid = new Guid("f02b9ae2-1aab-db11-964f-000cf15bffae");
   ColumnSet columns = new ColumnSet();
   columns.AllColumns = true;
   object acc = (Account)orgService.Retrieve("account", guid, columns);
}

When this code ran it caused a Null Reference Exception when calling the Retrieve method of the OrganizationServiceProxy object.
I was not able to determine the cause of this problem, and since there are other options I continued to test code against the CRM service reference. Unfortunately I hit another issue. My code, as mentioned, is dynamic and uses reflection to identify the attribute types and update attribute values. With the new CRM service only populated CRM attributes are returned, so reflection gives you a null reference error. I was then directed to look at the metadata service. To my disappointment that resulted in an InvalidOperationException - 'Collection was modified; enumeration operation may not execute.' - on calling the Execute method on the OrganizationServiceProxy object. See http://social.microsoft.com/Forums/en-US/crmdevelopment/thread/10273be7-22ec-4724-874f-a8f6bd133455 for more info on this issue.
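As an aside, when only the populated attributes come back, guarding the read avoids the null reference problem. A minimal sketch using the SDK assembly types (Microsoft.Xrm.Sdk); the attribute name is just an example, and orgService/guid are the proxy and record id from the code above:

// Retrieve the account; attributes that have no value are simply absent
// from the Attributes collection rather than being returned as nulls.
Entity account = orgService.Retrieve("account", guid, new ColumnSet(true));

// Check for the attribute before reading it.
string accountNumber = account.Contains("accountnumber")
    ? account.GetAttributeValue<string>("accountnumber")
    : null;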
At this point I started to doubt the stability of the environment and looked for a way to isolate the problem. I opened one of the packaged solutions in the SDK and, to my surprise, it worked fine. I copied my failing code into the SDK project and it also worked. So the code was fine; what was the problem?
I needed to know the cause, so I started comparing all the different characteristics of the two projects. As soon as I removed the service reference to the CRM service, the early binding code started to work.
When I added it back in I started to get exceptions again.
Hope this helps someone.
Geordie

Posted On Thursday, September 1, 2011 8:32 AM | Comments (0) |

Sunday, February 6, 2011

Conversion of BizTalk Projects to Use the New WCF-SAP Adaptor

We are in the process of upgrading our BizTalk environment from BizTalk 2006 R2 to BizTalk 2010. The SAP adaptor in BizTalk 2010 is an all new and more powerful WCF-SAP adaptor. When my colleagues tested the new adaptor they discovered that the format of the data extracted from SAP was not identical to that of the old adaptor. This is not a big deal if the structure of the messages from SAP is simple. In this case we were receiving the delivery and invoice iDocs, and both of these structures are complex, especially the delivery document. Over the past few years I have tweaked the delivery mapping to remove bugs from the original mapping. The idea of redoing these maps did not appeal, and due to the current workload was not even an option. I opted for a rather crude alternative: pulling in the iDoc in the new typed format and then adding a static map at the start of the orchestration to convert the data to the old schema.
Note the WCF-SAP data formats (set via the ‘ReceiveIdocFormat’ field on the binding tab of the configuration dialog box):
  • Typed:  Returns an XML document with the hierarchy represented in XML and all fields represented by XML tags.
  • RFC: Returns an XML document with the hierarchy represented in XML but the iDoc lines in flat file format.
  • String: This returns the iDoc in a format that is closest to the original flat file format but is still wrapped with some top level XML tags. The files also contained some strange characters at the end of each line.
I started with the invoice document and it was quite straightforward to add the mapping, but this is where my problems started. The orchestrations for these documents are dynamic and so require the identity of the partner in order to configure the orchestration correctly. The partner identity is in the EDI_DC40 segment of the iDoc. In the old project the RECPRN node of the segment was promoted, and the code that set a variable to the partner ID was now failing. After a lot of head scratching I discovered the problem was due to the addition of namespaces to the fields in the EDI_DC40 segment. To overcome this I needed to use an XPath query with a namespace manager, which had to be done in custom code, as sketched below.
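A rough sketch of that lookup, assuming the typed iDoc message has been loaded into an XmlDocument; the prefix and namespace URI below are placeholders and must match whatever the generated iDoc schema actually declares for the EDI_DC40 fields:

// xmlDoc holds the typed iDoc message body.
XmlNamespaceManager nsMgr = new XmlNamespaceManager(xmlDoc.NameTable);
// Placeholder namespace - substitute the one from the generated typed iDoc schema.
nsMgr.AddNamespace("idoc", "http://Microsoft.LobServices.Sap/2007/03/Types/Idoc");

// Pull the partner ID (RECPRN) out of the EDI_DC40 control record.
XmlNode recprnNode = xmlDoc.SelectSingleNode("//idoc:EDI_DC40/idoc:RECPRN", nsMgr);
string partnerId = (recprnNode != null) ? recprnNode.InnerText : null;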
I now tried to repeat the process with the delivery document. Unfortunately when we tried to get sample typed data from SAP an exception was thrown.
The adapter "WCF-SAP" raised an error message. Details "Microsoft.ServiceModel.Channels.Common.XmlReaderGenerationException: The segment or group definition E2EDKA1001 was not found in the IDoc metadata. The UniqueId of the IDoc type is: IDOCTYP/3/DESADV01/ZASNEXT1/640. For Receive operations, the SAP adapter does not support unreleased segments.
 
Our guess is that when the WCF-SAP adaptor tries to download the data it retrieves a metadata schema from SAP, and for some reason the schema does not match the data. This may be due to the version of SAP we are running or due to a customization. Either way, resolving this problem did not look easy.
When doing some research on this problem I found an article showing me how to get the data from SAP using the WCF-SAP adaptor without any XML tags.
Reproduction of Mustansir's blog:
Since the WCF based SAP Adapter is ... well, WCF based, all data flowing in and out of the adapter is encapsulated within a SOAP message. Which means there are those pesky xml tags all over the place. If you want to receive an Idoc from SAP, you can receive it in "Typed" format (in which case each column in each segment of the idoc appears within its own xml tag), or you can receive it in "String" format (in which case there are just 2 xml tags at the top, the raw xml data in string/flat file format, and the 2 closing xml tags).
In "String" format, an incoming idoc (for ORDERS05, containing 5 data records) would look like:
<ReceiveIdoc xmlns='http://Microsoft.LobServices.Sap/2007/03/Idoc/'><idocData>EDI_DC40 8000000000001064985620
E2EDK01005 800000000000106498500000100000001
E2EDK14 8000000000001064985000002000000020111000
E2EDK14 8000000000001064985000003000000020081000
E2EDK14 80000000000010649850000040000000200710
E2EDK14 80000000000010649850000050000000200600</idocData></ReceiveIdoc>
(I have trimmed part of the control record so that it fits cleanly here on one line).
Now, you're only interested in the IDOC data, and don't care much for the XML tags. It isn't that difficult to write your own pipeline component, or even some logic in the orchestration to remove the tags, right? Well, you don't need to write any extra code at all - the WCF Adapter can help you here!
During the configuration of your one-way Receive Location using WCF-Custom, navigate to the Messages tab. Under the section "Inbound BizTalk message body", select the "Path" radio button, and:
(a) Enter the body path expression as:
/*[local-name()='ReceiveIdoc']/*[local-name()='idocData']
(b) Choose "String" for the Node Encoding.
What we've done is use an XPath expression to pull out the value of the "idocData" node from the XML. Your Receive Location will now emit text containing only the idoc data.
You can at this point, for example, put the Flat File Pipeline component to convert the flat text into a different xml format based on some other schema you already have, and receive your version of the xml formatted message in your orchestration.
 
This was potentially a much easier solution than adding the static maps to the orchestrations and overcame the issue with ‘Typed’ delivery documents. Not quite so fast…
Note: When I followed Mustansir’s blog, the strange characters at the end of each line (mentioned above for the ‘String’ format) disappeared.
After configuring the adaptor and passing the iDoc data into the original flat file receive pipelines I was receiving exceptions.
There was a failure executing the receive pipeline: "PAPINETPipelines.DeliveryFlatFileReceive, CustomerIntegration2.PAPINET.Pipelines, Version=1.0.0.0, Culture=neutral, PublicKeyToken=4ca3635fbf092bbb" Source: "Pipeline " Receive Port: "recSAP_Delivery" URI: "D:\CustomerIntegration2\SAP\Delivery\*.xml" Reason: An error occurred when parsing the incoming document: "Unexpected data found while looking for:
'Z2EDPZ7'
The current definition being parsed is E2EDP07GRP. The stream offset where the error occured is 8859. The line number where the error occured is 23. The column where the error occured is 0.".
Although the new flat file looked the same as the old one, there was a difference. In the original file all lines in the document were exactly 1064 characters long. In the new file each line was truncated at the last alphanumeric character.
The final piece of the puzzle was to add a custom pipeline component to pad all the lines to 1064 characters.  This component was added to the decode stage of the custom delivery and invoice flat file disassembler pipelines.
Execute method of the custom pipeline component:
public IBaseMessage Execute(IPipelineContext pc, IBaseMessage inmsg)
{
    // Convert the message body stream to a string
    Stream s = null;
    IBaseMessagePart bodyPart = inmsg.BodyPart;

    // NOTE inmsg.BodyPart.Data is implemented only as a setter in the http adapter API and as a
    // getter and setter for the file adapter. Use GetOriginalDataStream to get the data instead.
    if (bodyPart != null)
        s = bodyPart.GetOriginalDataStream();

    StringBuilder newMsg = new StringBuilder();
    string strLine;
    try
    {
        StreamReader sr = new StreamReader(s);
        strLine = sr.ReadLine();
        while (strLine != null)
        {
            // Pad each line out to the fixed 1064 character record length
            newMsg.Append(strLine.PadRight(1064, ' ') + "\r\n");
            strLine = sr.ReadLine();
        }
        sr.Close();
    }
    catch (IOException ex)
    {
        throw new Exception("Error occurred trying to pad the message to 1064 characters", ex);
    }

    // Convert back to a stream and assign it to the Data property
    inmsg.BodyPart.Data = new MemoryStream(Encoding.UTF8.GetBytes(newMsg.ToString()));
    // Reset the position of the stream to zero
    inmsg.BodyPart.Data.Position = 0;
    return inmsg;
}

Posted On Sunday, February 6, 2011 12:45 PM | Comments (0) |

Tuesday, August 17, 2010

BizTalk 2010, BAM, SharePoint and the Default Application Pool Flag

I have recently started to look at EDI communications through BizTalk. Part of the EDI features in BizTalk is the use of BAM to track messaging.  Amongst the applications we currently have built in BizTalk we do some SharePoint communication.  This is also a possible component to our EDI solution.  We also have plans to move to BizTalk 2010 in the near future, so today I started to build my new BizTalk Dev machine.

I got a bit of a surprise when two parts of the required setup had me setting the “Enable 32-bit applications” flag to opposite values.

First configuration:  I was working through the setup document available from Microsoft, ‘Installing BizTalk Server 2010 on Windows Server 2008 R2 and 2008.docx’. 

On pg 13…
Enable Internet Information Services
Microsoft Internet Information Services (IIS) provides a Web application infrastructure for many BizTalk Server features. BizTalk Server requires IIS for the following features:
• HTTP adapter
• SOAP adapter
• Windows SharePoint Services adapter
• Secure Sockets Layer (SSL) encryption
• BAM Portal
To enable Internet Information Services 7.5
1. Click Start, point to Administrative Tools and then click Server Manager.
2. ….Step by step instruction here
Note
BAM Portal runs only in 32-bit mode. If you are installing IIS on a 64-bit machine then you must ensure that ASP.NET 2.0 is enabled in 32-bit mode. To do this, follow these steps:
1. On the taskbar, click Start, point to Administrative Tools, and then click Internet Information Services (IIS) Manager.
2. In the Connections pane, expand the server name, and then click Application Pools.
3. In the Actions pane, click Set Application Pool Defaults...
4. On the Application Pool Defaults dialog box, in Enable 32-bit applications, select True.
 
I set the Enable 32-bit applications flag as specified and continued with the install.

Second configuration:  As mentioned, we use SharePoint, and to use the SharePoint adaptor we need to install the SharePoint Foundation 2010 or WSS 3.0 (SP2) components.  After installation and configuration the SharePoint sites were inaccessible and returned an HTTP 503 ‘Service Unavailable’ error.  The application pools for the SharePoint components were stopped each time I tried to access the SharePoint sites.  After a bit of searching I found that to get the sites to work I needed to set ‘Enable 32-bit applications’ to False.

Resolution: I was able to resolve the issue by setting the application pool defaults’ ‘Enable 32-bit applications’ to True as described in the Microsoft text, and then overriding it per pool.  I right clicked each SharePoint application pool and selected ‘Advanced Settings…’ from the context menu, which let me set the ‘Enable 32-bit Applications’ flag to False for that specific pool.  I was then able to access the SharePoint sites even though the Default Application Pool's flag was set to True.
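If you would rather script the per-pool override than click through IIS Manager, something along these lines works against the IIS 7 configuration API (reference Microsoft.Web.Administration.dll; the pool name is an example):

using Microsoft.Web.Administration;

// Turn the 32-bit flag off for one SharePoint application pool while leaving
// the application pool defaults (needed by the BAM Portal) set to true.
using (ServerManager iis = new ServerManager())
{
    ApplicationPool pool = iis.ApplicationPools["SharePoint - 80"]; // example pool name
    pool.Enable32BitAppOnWin64 = false;
    iis.CommitChanges();
}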

Hope this helps, Geordie...

Note:  When I tried to configure the BizTalk server using the basic configuration tool it failed badly.  Once I gave the BizTalk service account create permissions on the database I was able to automatically configure BizTalk up to the BAM components.  After that I had to manually configure each piece, setting the default application pool appropriately for the component I was configuring.  Also, when configuring the SharePoint adaptor you need to select the correct website (i.e. the 'SharePoint-80' site from the dropdown list; the default web site will not work).

Posted On Tuesday, August 17, 2010 3:06 PM | Comments (0) |

Thursday, March 25, 2010

Adding Custom Pipeline code to the Pipeline Component Toolbox

Add ...

"C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\GacUtil.exe" /nologo /i "$(TargetPath)" /f
 xcopy "$(TargetPath)" "C:\Program Files\Microsoft BizTalk Server 2006\Pipeline Components" /d /y

to the 'Post Build event command line' for the pipeline support component project in Visual Studio. This will automatically put the support dll in the GAC and in BizTalk’s Pipeline Components directory when the solution/project is built.

Build the project.

Right click on the Toolbox title bar and select 'Choose Items…'. Click on the BizTalk Pipeline Components tab and select the new component from the list. If it is not available in the list, browse to the BizTalk pipeline components folder (C:\Program Files\Microsoft BizTalk Server 2006\Pipeline Components) and select the dll.

Posted On Thursday, March 25, 2010 1:55 PM | Comments (1) |

Thursday, March 18, 2010

Click Once Deployment Process and Issue Resolution

Introduction
We are adopting Click Once as a deployment standard for thick .NET application clients.  The latest version of this tool has matured to a point where it can be used in an enterprise environment.  This guide will identify how to use Click Once deployment and promote code through the dev, test and production environments.
Why Use Click Once over SCCM
If we already use SCCM, why add Click Once to the deployment options?  The advantage of Click Once is the ability to update the code in a single location and have the update flow automatically down to the user community.  There have been challenges in the past with getting configuration updates to download, but these can now be overcome.  With SCCM you can do the same thing, but the update then needs to be packaged and pushed out to users.  Each time a new user is added to an application, an administrator needs to spend time pushing out any required application packages.  With Click Once the user simply goes to a web link and the application and its prerequisites are installed automatically.
New Deployment Steps Overview
Deployment in an enterprise environment includes several steps as the solution moves through the development life cycle before being released into production.  To mitigate risk during the release phase, it is important to ensure the solution is not deployed directly into production from the development tools.  Although this is the easiest path, it can introduce untested code into production and lead to unexpected results.
1. Deploy the client application to a development web server using Visual Studio 2008 Click Once deployment tools.  Once potential production versions of the solution are being generated, ensure the production install URL is specified when deploying code from Visual Studio.  (For details see ‘Deploying Click Once Code from Visual Studio’)

2. xCopy the code to the test server.  Run the MageUI tool to update the URLs, signing and version numbers to match the test server. (For details see ‘Migrating Click Once Code to a new Server without using Visual Studio’)

3. xCopy the code to the production server.  Run the MageUI tool to update the URLs, signing and version numbers to match the production server.  The certificate used to sign the code should be provided by a certificate authority that is trusted by the client machines.  Finally, make sure the setup.exe contains the production install URL.  If not, redeploy the solution from Visual Studio to the dev environment specifying the production install URL, then xcopy the setup.exe file from dev to production.  (For details see ‘Migrating Click Once Code to a new Server without using Visual Studio’)

Detailed Deployment Steps
Deploying Click Once Code From Visual Studio
Open Visual Studio and create a new WinForms or WPF project.
 


In the solution explorer right click on the project and select ‘Publish’ in the context menu.


 
The ‘Publish Wizard’ will start.  Enter the development deployment path.  This could be a local directory or web site.  When first publishing the solution, set this to a development web site and Visual Studio will create a site with an install.htm page.  Click Next.  Select whether the application will be available both online and offline, then click Finish.
Once the initial deployment is complete, republish the solution, this time mapping to the directory that holds the code that was just published.  This time the Publish Wizard contains an additional option.


 
The setup.exe file that is created has the install URL hardcoded in it.  It is this screen that allows you to specify the URL to use.  At some point a setup.exe file must be generated for production: enter the production URL and deploy the solution to the dev folder.  This file can then be saved for later use in the deployment to production.  During development this URL should point to the development site to avoid accidentally installing the production application.
Visual Studio will publish the application to the desired location; in the process it will create an anonymous ‘.pfx’ certificate to sign the deployment configuration files.  A production certificate should be acquired in preparation for deployment to production.
 


(Screenshot: Directory structure created by Visual Studio)

(Screenshot: Application files created by Visual Studio)

(Screenshot: Development web site (install.htm) created by Visual Studio)


Migrating Click Once Code to a new Server without using Visual Studio
To migrate the Click Once application code to a new server, a tool called MageUI is needed to modify the .application and .manifest files.  The MageUI tool is usually located in the ‘C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bin’ folder, or it can be downloaded from the web.
When deploying to a new environment copy all files in the project folder to the new server.  In this case the ‘ClickOnceSample’ folder and contents.  The old application versions can be deleted, in this case ‘ClickOnceSample_1_0_0_0’ and ‘ClickOnceSample_1_0_0_1’. 
Open IIS Manager and create a virtual directory that points to the project folder.  Also make the publish.htm the default web page.
 
Run the MageUI tool and then open the .application file in the root project folder (in this case the ‘ClickOnceSample’ folder).
Click on ‘Deployment Options’ in the left hand list, update the URL to the new server URL, and save the changes.
 
When MageUI tries to save the file it will prompt for the file to be signed.
 
This step cannot be bypassed if you want the Click Once deployment to work from a web site.  The easiest solution for test environments is to use the auto-generated certificate that Visual Studio created for the project.  This certificate can be found with the project source code.  To save time, go to File > Preferences and configure the ‘Use default signing certificate’ fields.
 
Future deployments will only require the application files to be transferred to the new server.  The only difference is that when updating the .application file, the ‘Version’ must be updated to match the new version and the ‘Application Reference’ has to be updated to point to the new .manifest file.
 

 
Updating the Configuration File of a Click Once Deployment Package without using Visual Studio
When an update to the configuration file is required, modifying the ClickOnceSample.exe.config.deploy file will not result in current users getting the new configuration.  We do not want to go back to Visual Studio and generate a new version, as this might introduce unexpected code changes.  A new version of the application can be created by copying the latest version folder (in this case ClickOnceSample_1_0_0_2) and pasting it into the Application Files directory.  Rename the copied directory ‘ClickOnceSample_1_0_0_3’.  In the new folder open the configuration file in Notepad and make the configuration changes.
Run MageUI and open the manifest file in the newly copied directory (ClickOnceSample_1_0_0_3).
 
Edit the manifest version to reflect the newly copied files (in this case 1.0.0.3), then save the file.  Open the .application file in the root folder and again update the version to 1.0.0.3.  Since the server has not changed, the Deployment Options/Start Location URL should still be correct.  The Application Reference needs to be updated to point to the new version’s .manifest file.  Save the file.
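The same edits can be scripted with mage.exe, the command line version of MageUI, if you prefer not to use the GUI.  A rough sketch under the assumptions of this walkthrough (file names, URL, certificate and password are examples):

rem Re-version and re-sign the application manifest in the copied folder
mage -Update "Application Files\ClickOnceSample_1_0_0_3\ClickOnceSample.exe.manifest" -Version 1.0.0.3 -CertFile cert.pfx -Password <password>

rem Point the deployment manifest at the new application manifest and re-sign it
mage -Update ClickOnceSample.application -Version 1.0.0.3 -AppManifest "Application Files\ClickOnceSample_1_0_0_3\ClickOnceSample.exe.manifest" -ProviderUrl "http://server/ClickOnceSample/ClickOnceSample.application" -CertFile cert.pfx -Password <password>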
The next time a user runs the application, the new version of the configuration file will be downloaded.  It is worth noting that there are two different types of configuration parameter: application and user.  With Click Once deployment the difference is significant.  When an application is downloaded, the configuration file is also brought down to the client machine.  The developer may have written code to update the user parameters in the application, so each time a new version of the application is downloaded the user parameters are at risk of being overwritten.  With Click Once deployment the system knows whether the user parameters are still the default values.  If they are, they will be overwritten with the new default values in the configuration file.  If they have been updated by the user, they will not be overwritten.
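For context, this is the kind of user-scoped setting the paragraph refers to; a minimal sketch (the setting names are examples, not from the sample project):

// User-scoped settings are persisted per user; ClickOnce leaves the user's
// saved values alone across updates unless they are still the defaults.
Properties.Settings.Default.LastSelectedServer = "TestServer01"; // example user setting
Properties.Settings.Default.Save();

// Application-scoped settings come from the deployed .config and are read-only at runtime.
string serviceUrl = Properties.Settings.Default.ServiceUrl;      // example application setting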


(Screenshot: Settings configuration view in Visual Studio)


Production Deployment
When deploying the code to production it is prudent to disable the development and test deployment sites.  This will allow errors such as an incorrect URL to be quickly identified in the initial testing after deployment.  If the sites are active there is no way to know whether the application was downloaded from the production deployment or was redirected to test or dev.
 

Troubleshooting
Clicking the install button on the install.htm page fails.
Error: URLDownloadToCacheFile failed with HRESULT '-2146697210' Error: An error occurred trying to download <file>
 
This is due to the setup.exe file pointing to the wrong location.
The setup.exe file has the install URL hardcoded in it, as described in ‘Deploying Click Once Code from Visual Studio’ above.  Regenerate it by publishing from Visual Studio with the production install URL specified, deploy the solution to the dev folder, and then copy the resulting setup.exe to the production server.

 

 

Posted On Thursday, March 18, 2010 1:47 PM | Comments (1) |

Thursday, February 4, 2010

How to debug SharePoint feature receivers

 

Just a quick one to detail how this is done in case someone finds it useful. For those still getting into working with SharePoint features, a feature receiver is a class containing code you've written to execute when a feature gets activated (or deactivated, installed or uninstalled).
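For reference, the shape of such a class looks roughly like this (a bare sketch; the class name and logic are examples, and in WSS 3.0 all four overrides are required):

using Microsoft.SharePoint;

// Example feature receiver - the debugger will break inside FeatureActivated
// once the assembly and .pdb are deployed and you are attached to w3wp.exe.
public class SampleFeatureReceiver : SPFeatureReceiver
{
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        // For a web-scoped feature the parent is the SPWeb being activated against.
        SPWeb web = properties.Feature.Parent as SPWeb;
        // ... activation logic (and breakpoints) go here ...
    }

    public override void FeatureDeactivating(SPFeatureReceiverProperties properties) { }
    public override void FeatureInstalled(SPFeatureReceiverProperties properties) { }
    public override void FeatureUninstalling(SPFeatureReceiverProperties properties) { }
}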

The key thing to note is that it's just standard ASP.Net debugging. The process is:-
 

  • deploy the assembly to the runtime location, either the GAC or the site bin directory. Note that if it's the bin directory your feature will also need appropriate CAS policy to grant the code the permissions it requires.
  • deploy the .pdb file to the same location. If this is the GAC, you can do the following:-

    - map a drive to the GAC folder (i.e. C:\WINDOWS\assembly) but using a UNC path such as \\[MachineName]\C$\WINDOWS\assembly. This allows you to browse the GAC without the shell which the framework puts on the folder, thus allowing you to see the actual structure of the files on disk.
    - locate the GAC_MSIL subfolder. In here you will see a directory for each assembly currently stored in the GAC. Find the directory for your assembly, and add the .pdb file so it sits next to the dll.


     
  • In Visual Studio, attach the debugger to the w3wp.exe process. Note that occasionally there will be 2 of these processes (e.g. when the process is being recycled), and it's possible to attach to the wrong one. Either do an IISReset to stop them both so that only 1 spins up with the next web request, or type 'iisapp' at the command prompt to get the process IDs of the running w3wp.exe processes. You can then match the correct one to the list which appears in the 'Attach debugger' dialog.
  • Activate the feature through the web UI (Site settings > Site Collection features/Site features). The debugger will now stop on any breakpoints you set.

And remember that the assembly must be built in debug mode so that the symbols are created.

Thanks to Chris O'Brien for this info.
(NB this is a copy of his post stored here for my own reference.  Any questions, please follow the link to Chris's site.)

 

Posted On Thursday, February 4, 2010 7:20 PM | Comments (0) |

Monday, February 1, 2010

Using OWSSVR.dll to filter SharePoint data on the server side

Using OWSSVR.dll to filter data on the server side is a great idea, as SharePoint can be slow and tends to become the corporate dumping ground for all documents.  Why delete anything when we can retain it for auditing...
So being able to filter the data down to the current entries before transmitting it over the wire is great.  It looked like the perfect solution for my problem, but it took a number of confused hours before I found the posting by GaryJ @ Novotronix.  I have copied it below.
owssvr.dll can be used in InfoPath projects to provide filtered or cascading drop down lists. Use an XML datasource - the syntax is http://yourserver/yourweb/_vti_bin/owssvr.dll?Cmd=Display&List={guid}&XMLDATA=TRUE.
However, beware if your list contains a lookupfield that allows multiple selections. If you have this, then the above syntax will return an invalid XML, or a blank data set.
List contains lookup field(s) with single selection - syntax works.
List contains lookup field(s) with multiple selection - syntax fails.
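For the actual server-side filtering, the same Display command accepts filter parameters; a hedged example (the field name, value and list GUID are placeholders):

http://yourserver/yourweb/_vti_bin/owssvr.dll?Cmd=Display&List={guid}&XMLDATA=TRUE&FilterField1=Status&FilterValue1=Active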

Posted On Monday, February 1, 2010 8:21 PM | Comments (1) |
