Sunday, February 5, 2012
When you first create a new table, there is no index created by default. In technical terms, a table without an index is called a “heap”. We can confirm that this new table doesn’t have an index by looking at the sysindexes system table, which contains a row for this table with an indid of 0. The sysindexes table, which exists in every database, tracks table and index information. “Indid” refers to Index ID, and is used to identify indexes. An indid of 0 means that a table does not have a clustered index and is stored by SQL Server as a heap.
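For instance, a quick check might look like the following (assuming the table in this example is named DummyTable1, and using the legacy sysindexes view, which later SQL Server versions replace with sys.indexes):

```sql
-- Returns indid = 0 when the table is a heap; table name is illustrative.
SELECT id, indid, name
FROM sysindexes
WHERE id = OBJECT_ID('DummyTable1')
```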
Creating a Non-Clustered Index
Now, we will create a unique non-clustered index on the empid column to see how it affects the data, and how the data is stored in SQL Server.
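For reference, DummyTable1 isn’t defined anywhere in this post; a minimal sketch of what it might look like (only empid matters for the index example):

```sql
-- Hypothetical table definition for the example below.
CREATE TABLE DummyTable1
(
    empid INT NOT NULL,
    empname VARCHAR(50)
)
```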
CREATE UNIQUE NONCLUSTERED INDEX DummyTable1_empid
ON DummyTable1 (empid)
A clustered index determines the physical order of data in a table. A clustered index is analogous to a telephone directory, which arranges data by last name. Because the clustered index dictates the physical storage order of the data in the table, a table can contain only one clustered index. However, the index can comprise multiple columns (a composite index), like the way a telephone directory is organized by last name and first name.
Clustered indexes are also efficient for finding a specific row when the indexed value is unique. For example, the fastest way to find a particular employee using the unique employee ID column emp_id is to create a clustered index or PRIMARY KEY constraint on the emp_id column.
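As a sketch, either of the following gives emp_id a clustered index (the table name employee is assumed here; only one clustered index can exist on a table at a time, so you would pick one):

```sql
-- Option 1: a clustered index directly on the key column.
CREATE CLUSTERED INDEX CI_employee_empid ON employee (emp_id)

-- Option 2: a PRIMARY KEY constraint, which creates a clustered
-- index by default if none exists yet.
ALTER TABLE employee ADD CONSTRAINT PK_employee PRIMARY KEY CLUSTERED (emp_id)
```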
Note PRIMARY KEY constraints create clustered indexes automatically if no clustered index already exists on the table and a nonclustered index is not specified when you create the PRIMARY KEY constraint.
Alternatively, a clustered index could be created on lname, fname (last name, first name), because employee records are often grouped and queried in this way rather than by employee ID.
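A composite clustered index along those lines might look like this (column names lname and fname come from the text; the table name employee is assumed):

```sql
-- Composite clustered index: rows are stored ordered by
-- last name, then first name within equal last names.
CREATE CLUSTERED INDEX CI_employee_name ON employee (lname, fname)
```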
How Does a Non-Clustered Index Work?
A table can have more than one non-clustered index, but only one clustered index, because the clustered index itself is the B-tree in which the table’s rows are physically stored. When a table has a clustered index, each non-clustered index entry locates its row through the clustered index key.
Most database administrators are familiar with the potential performance benefits they can gain through the judicious use of indexes on database tables. Indexes allow you to speed query performance on commonly used columns and improve the overall processing speed of your database.
Microsoft SQL Server supports two types of indexes:
Clustered indexes define the physical sorting of a database table’s rows in the storage media. For this reason, each database table may have only one clustered index. If a PRIMARY KEY constraint is created for a database table and no clustered index currently exists for that table, SQL Server automatically creates a clustered index on the primary key.
Non-clustered indexes are created outside of the database table and contain a sorted list of references to the table itself.
SQL Server 2000 supports a maximum of 249 non-clustered indexes per table. However, it’s important to keep in mind that non-clustered indexes slow down the data modification and insertion process, so indexes should be kept to a minimum.
One of the hardest tasks facing database administrators is the selection of appropriate columns for non-clustered indexes. You should consider creating non-clustered indexes on any columns that are frequently referenced in the WHERE clauses of SQL statements. Other good candidates are columns referenced by JOIN and GROUP BY operations.
You may wish to also consider creating non-clustered indexes that cover all of the columns used by certain frequently issued queries. These queries are referred to as “covered queries” and experience excellent performance gains.
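For example, a non-clustered index can cover a query outright. The INCLUDE syntax shown here requires SQL Server 2005 or later, and the table and column names are illustrative:

```sql
-- Covers: SELECT emp_id, fname FROM employee WHERE lname = 'Smith'
-- The query is answered entirely from the index, without touching the table.
CREATE NONCLUSTERED INDEX IX_employee_lname_cover
ON employee (lname) INCLUDE (emp_id, fname)
```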
SQL Server provides a wonderful facility known as the Index Tuning Wizard which greatly enhances the index selection process. To use this tool, first use SQL Profiler to capture a trace of the activity for which you wish to optimize performance. You may wish to run the trace for an extended period of time to capture a wide range of activity. Then, using Enterprise Manager, start the Index Tuning Wizard and instruct it to recommend indexes based upon the captured trace. It will not only suggest appropriate columns for queries but also provide you with an estimate of the performance increase you’ll experience after making those changes!
Saturday, February 4, 2012
Nationwide search on dice.com
Feb. 4, 2012
C# - 8,518
.NET MVC 1,101
Microsoft Dynamics CRM 461
SQL Server 14,390
Job Title: Empty ; Skills/keyword: mvc location: empty
Results: 523 jobs nationwide (mostly Java)
Change to Skills/keyword ASP.NET MVC
Results: 80 jobs nationwide
.net mvc = 129
groovy = 66 jobs
grails = 44 jobs
Java > 5,000
Delphi = 93
VB = 2,559
Visual Basic 2,604
ORM - 150
Hibernate - 851
NHibernate - 53
Linq = 162
Entity Framework = 97
fxcop = 5 jobs
Oracle = >5,000
SQL Server = 4,148
mysql = 1,431
db2 = 699
Nunit = 120
JUnit = 422
Selenium = 142
Coldfusion = 304
Perl = 2,371
PHP = 1,540
Python = 1,029
Ruby on Rails = 221
Automation Engineer = 297
Spring = 1,710
Subversion = 559
VSS = 83
Visual Source Safe = 120
CVS = 824
StarTeam = 14
vmware = 1,326
Virtualbox = 3
C++ = 4,266
Lambda expressions are a powerful tool for writing quick, concise code. They can be used in numerous situations, most notably in conjunction with LINQ statements.
A lambda expression is an anonymous function that can contain expressions and statements, and can be used to create delegates or expression tree types.
All lambda expressions use the lambda operator =>, which is read as “goes to”. The left side of the lambda operator specifies the input parameters (if any) and the right side holds the expression or statement block. The lambda expression x => x * x is read “x goes to x times x.”
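A minimal, self-contained illustration of that lambda compiled into a delegate (standard C#, nothing specific to this post):

```csharp
using System;

class Program
{
    static void Main()
    {
        // "x goes to x times x" as a Func delegate.
        Func<int, int> square = x => x * x;
        Console.WriteLine(square(5)); // 25
    }
}
```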
Here’s what a typical LINQ query looks like without using lambda expressions:
var query = from m in db.PersonalMessages
            where m.ConversationID_FK == ConversationID
            select m;
Okay, not bad. It looks like a SQL query in reverse (i.e. you start off with “from” and end with “select”). This is so that intellisense can help you out—if you started with select, it would have no idea where you were going with the query.
We are selecting PersonalMessages where the ConversationID is equal to some value passed into the method. The query itself is kinda verbose, and I like to keep my code short and sweet, so how do I rewrite this using lambda expressions? Here is the code, and then we’ll break it down:
var query = db.PersonalMessages
.Where(m => m.ConversationID_FK == ConversationID);
We got rid of “select”, “from”, “in”, etc. Cleaned it up quite a bit. What all is happening, though? Basically it’s written out like:
My Query = Get Personal Messages from my data context Where the ConversationID is equal to this #.
In other words, it reads exactly like the first written-out query, but it’s done faster. The Where statement is like a mini-method. You’re defining some variable m (you can use any letter; I always use m for the sake of convention in my code), and then in this case you’re giving m some condition to work with.
You’re basically saying, define m real quick (m=>) as a stand-in for PersonalMessage. Then only return PersonalMessages where m’s ConversationID is equal to some value.
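The same Where call works on plain in-memory collections too, not just LINQ to SQL. A self-contained sketch with made-up stand-ins for the PersonalMessages rows:

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        // Hypothetical stand-ins for PersonalMessages records.
        var messages = new[]
        {
            new { ConversationID_FK = 1, Text = "hello" },
            new { ConversationID_FK = 2, Text = "goodbye" },
            new { ConversationID_FK = 1, Text = "see you" },
        };

        int ConversationID = 1;
        var query = messages.Where(m => m.ConversationID_FK == ConversationID);

        foreach (var m in query)
            Console.WriteLine(m.Text); // prints "hello" then "see you"
    }
}
```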
Friday, February 3, 2012
Then install the Android plugin
Basic Hello World
KSoap2 for WCF
Thursday, February 2, 2012
Recently someone was showing me WCFStorm for testing WCF services, which is very simple to use.
However, SOAPUI certainly has a lot of power.
I needed to add an image to send to a service and noticed that references
My XML ended up looking like this:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ubox="UBoxMobileService.Interfaces" xmlns:ubox1="UBoxMobile.Data.Classes">
Pros and Cons of NOLOCK
I have seen some developers use WITH (NOLOCK) when querying in SQL Server and wondered why they use it. I explored it and found that it’s useful for improving query performance. However, there is a disadvantage: you may not be sure that you are getting data that isn’t currently being updated in the table. That is, without lock protection, you cannot be guaranteed that the data isn’t changing during the time that the query is running.
With these "Hints" you can quickly see where problems can occur:
SQL Server NOLOCK Hint & other poor ideas.
This table hint, also known as READUNCOMMITTED, is applicable to SELECT statements only. NOLOCK indicates that no shared locks are issued against the table that would prohibit other transactions from modifying the data in the table.
The benefit of the statement is that it allows you to keep the database engine from issuing locks against the tables in your queries; this increases concurrency and performance because the database engine does not have to maintain the shared locks involved. The downside is that, because the statement does not issue any locks against the tables being read, some "dirty," uncommitted data could potentially be read. A "dirty" read is one in which the data being read is involved in a transaction from another connection. If that transaction rolls back its work, the data read from the connection using NOLOCK will have read uncommitted data. This type of read makes processing inconsistent and can lead to problems. The trick is being able to know when you should use NOLOCK.
As a side note, NOLOCK queries also run the risk of reading "phantom" data, or data rows that are available in one database transaction read but can be rolled back in another.
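Applying NOLOCK to every table in a query is equivalent to running the statement at the READ UNCOMMITTED isolation level, which can be set once for the session instead:

```sql
-- Session-wide alternative to sprinkling WITH (NOLOCK) on each table.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(*) FROM SalesHistory;
```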
The following example shows how NOLOCK works and how dirty reads can occur. In the script below, I begin a transaction and insert a record in the SalesHistory table.
BEGIN TRANSACTION
INSERT INTO SalesHistory
(Product, SaleDate, SalePrice)
VALUES
('PoolTable', GETDATE(), 500)
The transaction is still open, which means that the record that was inserted into the table still has locks issued against it. In a new query window, run the following script, which uses the NOLOCK table hint in returning the number of records in the SalesHistory table.
SELECT COUNT(*) FROM SalesHistory WITH(NOLOCK)
The number of records returned is 301. Since the transaction that entered the record into the SalesHistory table has not been committed, I can undo it. I'll roll back the transaction by issuing the following statement:
ROLLBACK TRANSACTION
This statement removes the record from the SalesHistory table that I previously inserted. Now I run the same SELECT statement that I ran earlier:
SELECT COUNT(*) FROM SalesHistory WITH(NOLOCK)
This time the record count returned is 300. My first query read a record that was not yet committed -- this is a dirty read.