Improve SQL Performance

Recently we were assigned a task to improve the performance of a couple of SQL queries that ran for 15 seconds and 30 seconds. At the beginning we were not sure whether the performance could be improved any further, given that there were about 80 million rows in the database. But I have to admit that SQL Server is really powerful, as eventually we were able to reduce the times down to 2 seconds and 6 seconds!

During the process, I had a few interesting findings about SQL Server and learned a lot about troubleshooting SQL Server performance issues. Here I want to share a few tips on optimizing SQL statements that I learned from the experience.

Check indexes first

We sometimes get ahead of ourselves and jump directly into analyzing SQL queries, but the answer to a performance issue may just lie in the indexing itself, so always check whether the necessary indexes are present. A good way to find out what indexes you need is to use Dynamic Management Views to monitor system health and come up with the necessary indexes. With appropriate indexes in place, the performance increase can be substantial.
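As a rough sketch, the missing-index DMVs can be queried like this. The view and column names are SQL Server's documented sys.dm_db_missing_index_* views; the "estimated benefit" expression is just a common heuristic, not an official formula:

```sql
-- Suggested missing indexes, roughly ordered by estimated benefit.
SELECT TOP (20)
    mid.statement                AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_user_impact,
    migs.user_seeks * migs.avg_total_user_cost * (migs.avg_user_impact / 100.0)
                                 AS estimated_benefit
FROM sys.dm_db_missing_index_details mid
JOIN sys.dm_db_missing_index_groups mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats migs
    ON migs.group_handle = mig.index_group_handle
ORDER BY estimated_benefit DESC;
```

Treat the output as suggestions to review, not indexes to create blindly.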

Clustered vs. Non-clustered index

There are tons of articles about clustered and non-clustered indexes and their benefits. Knowing the difference between them definitely helps with troubleshooting performance issues. Here is a nice article explaining clustered and non-clustered indexes.

Here I would like to share a metaphor that helped me understand clustered and non-clustered indexes – a clustered index is like the page numbers of a book, where the logical order of the pages is the same as their physical order in the book, e.g. page 1 is the first and page 100 is the last page of a 100-page book. A non-clustered index, on the other hand, is like the index page of a book, where the logical order of the terms on the index page is not the same as their actual locations in the book. For instance, the word Apple may be listed at the beginning of the index page but first appear on the last page of the book.
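To make the metaphor concrete, here is a minimal sketch; the table and column names are made up for illustration:

```sql
-- The clustered index defines the physical order of the rows
-- (the "page numbers" of the book). A table can have only one.
CREATE CLUSTERED INDEX IX_Orders_OrderId
    ON dbo.Orders (OrderId);

-- A non-clustered index is a separate structure (the book's "index page"):
-- it stores CustomerId in sorted order with pointers back to the rows.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    INCLUDE (OrderDate);
```

The INCLUDE clause copies extra columns into the non-clustered index so queries that need only those columns can avoid touching the table at all.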

Analyze, trial and error

To really find out why a certain query is running slow, more information is needed.

View the execution plan and SET STATISTICS IO

The execution plan can be quite helpful for figuring out which parts of a complex query are taking time and how SQL Server optimized the query. Though it does not provide a solution directly, it does provide hints or directions for troubleshooting performance issues. SQL Server Management Studio, and even Visual Studio, now provide the ability to view execution plans. A tool I find quite useful is SQL Sentry Plan Explorer, which displays the execution plan diagram with much more information – for example, it displays the number of records processed above each line, which is very convenient.

Another quite handy tool, STATISTICS IO, displays I/O statistics for your query. Run SET STATISTICS IO ON and information such as the number of scans and logical reads will be displayed.
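For example (any query works; the table name here is hypothetical):

```sql
SET STATISTICS IO ON;

SELECT COUNT(*)
FROM dbo.Orders
WHERE OrderDate >= '2015-01-01';

-- The Messages tab then shows a line per table, something like:
-- Table 'Orders'. Scan count 1, logical reads 1234, physical reads 0, ...

SET STATISTICS IO OFF;
```

High logical reads on a table is a good hint that an index is missing or not being used.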

Break nested queries

However, the execution plan may not be very straightforward sometimes, especially for a complex query with many nested sub-queries. The large query gets optimized by SQL Server, which makes the execution plan harder to understand. Instead of trying to figure out what SQL Server is doing, a faster way for me is to just break the large query into smaller ones and test the performance of each of them. It is possible that SQL Server is not picking the best route to run the query and the ‘optimized’ query is actually running much slower, which is exactly what happened in my case. My query should only take 2 seconds to run, whereas the query ‘optimized’ by SQL Server takes 15 seconds. Why? Optimizing a query also takes time, and SQL Server may just not find the best optimization in time.

If SQL Server is selecting a slower way to execute the query, there are a couple of workarounds:

  1. Use OPTION (FORCE ORDER) at the end of the query to force the join order to be the same as written in the query statement.
  2. Break nested sub-queries into temporary tables. This way you can force SQL Server to materialize sub-queries.
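A sketch of both workarounds, with illustrative table and column names:

```sql
-- 1. Force the join order to match the order written in the query.
SELECT o.OrderId, c.Name
FROM dbo.Orders o
JOIN dbo.Customers c ON c.CustomerId = o.CustomerId
OPTION (FORCE ORDER);

-- 2. Materialize a sub-query into a temp table first...
SELECT CustomerId, COUNT(*) AS OrderCount
INTO #RecentOrders
FROM dbo.Orders
WHERE OrderDate >= '2015-01-01'
GROUP BY CustomerId;

-- ...then join against the already-computed result.
SELECT c.Name, r.OrderCount
FROM #RecentOrders r
JOIN dbo.Customers c ON c.CustomerId = r.CustomerId;
```

The temp-table approach also makes each step easy to time on its own, which fits the "break the large query into smaller ones" strategy above.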
Trial and Error

Another quick and easy approach is to just try different ways of writing the same query: swap IN with JOIN, change an INNER JOIN to a LEFT JOIN or to EXISTS, or change a non-unique non-clustered index to a clustered index. Some changes take only a few seconds to make, and they may just work. Once something works, you can research a bit into why, or just choose to forget about it if you're not in the mood. I happen to know someone (me) who is never in the mood 🙂
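For instance, an IN sub-query can often be rewritten with EXISTS (again with illustrative names); the two forms are logically equivalent here but may get different plans:

```sql
-- Original: IN with a sub-query
SELECT o.OrderId
FROM dbo.Orders o
WHERE o.CustomerId IN (SELECT c.CustomerId
                       FROM dbo.Customers c
                       WHERE c.Region = 'West');

-- Rewritten: correlated EXISTS
SELECT o.OrderId
FROM dbo.Orders o
WHERE EXISTS (SELECT 1
              FROM dbo.Customers c
              WHERE c.CustomerId = o.CustomerId
                AND c.Region = 'West');
```

Comparing the execution plans of the two versions takes seconds and occasionally pays off big.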

How to write to Windows Azure Website’s application log and more

We struggled with Windows Azure Websites because we couldn’t write logs. Using Log4Net to write to a log file, as we used to do, does not work for Azure Websites because of limited file permissions. We had to resort to email notifications or a Virtual Machine when we needed to debug Azure Websites, which was a big headache!

Fortunately, it is all over now. To write to the Application Log of an Azure Website, just use the System.Diagnostics.Trace namespace and methods like TraceInformation, TraceError, and TraceWarning to record different levels of log!

Trace.WriteLine("Log Verbose level log");

Trace.TraceInformation("Log Information level log");

Trace.TraceWarning("Log Warning level log");

Trace.TraceError("Log Error level log");

Then just turn on Application Logging and select a logging level for the Azure Website.


With this feature, it becomes much easier to troubleshoot Azure Websites. Even better, Microsoft provides a streaming log feature that lets you view application logs in REAL-TIME! (New Azure portal only.)


Furthermore, here is something developers will definitely like – this streaming log is also available in Visual Studio, and you can filter the results using a Regular Expression! (The latest Azure SDK is required.)


Since file logging is supposed to be turned off automatically after 12 hours, you may also want to log to table storage. Not a problem: you can set up an Azure Storage account to hold the log.


Click View Settings of the Azure Website in Visual Studio. In the Logs tab, there will be a nice table view of the log. I did notice that it uses a lot of the Azure Website's memory – just something to consider.


That’s what I know about logging for Azure Websites. Hopefully it is helpful. We use Azure for most of our web applications and I think it is just getting better every day. Now, with the ability to write application logs for Azure Websites, it meets all of our needs, but there is still a huge set of exciting features we haven’t used. I look forward to exploring those someday!

How to use soapUI 5.0 to post JSON to REST Service

I wrote an article on the same topic back in 2013, but for soapUI version 4.5.1. To post JSON with the 4.5.1 version of soapUI, you almost have to trick the application by explicitly typing in the media type ‘application/json’, and there are also a few settings that don’t really make sense. I always forgot the steps and had to go back to check my blog post.

Since then I have tried a couple of newer versions of soapUI for the same task, hoping it would get more intuitive, but I eventually stuck with the 4.5.1 version because I didn’t see much improvement in those versions. Today I got my hands on the latest 5.0 version of the tool. Still free – great!

First, I create a new SOAP project. The ‘Opens dialog to create REST Service’ option is gone. OK – it is already simpler from the first step.

Now that my project is created, I right-click the project and select New REST Service from URI.


In the popup window, I put in the service endpoint where I am going to post my JSON to.


BOOM! It creates everything for me. No need to provide a Service Name, Parameter Name, or Method Name – everything is extracted from the endpoint provided. This is great UI design, because if, out of nowhere, it asked the user to provide a Service Name, she would be confused: what is the Service Name for?


It also automatically opens up Request1, where you can see that it selects the GET method by default.


I changed the method to POST, and it selects the correct Media Type for JSON posting. Just type in your JSON body and click the green arrow in the top left corner to post.


I am impressed by how easy the process has become, and glad that the development team is putting effort into improving the user experience even though it is already a well-functioning application. User experience is really a big part of software – a good UX can really change your life!

Use Windows Authentication on a WCF service behind an SSL handler

After my last blog post about using cert-based Message security for a WCF web service, we started looking into using Windows Authentication for a different system that also sits behind a load balancer/SSL handler. Windows Authentication provides a much easier integration option – the client side can simply provide a domain user account to be authenticated, whereas with cert-based authentication, each client needs to install a certificate. That increases the difficulty for clients developing against the service, and it was our motivation to look into Windows Authentication instead.

With the experience of cert-based authentication, I was pretty sure it wasn’t going to be easy to use Windows Authentication in a load-balanced environment. The first thing we tried, of course, was to follow Microsoft’s guide to using wsHttpBinding with Windows Authentication and Message security, with one difference: our client needs to use Transport security instead of Message because it must use HTTPS.

As we suspected, this setup didn’t work, because the service expects Message security while the client is using Transport security. We then tried TransportWithMessageCredential and some other settings. None of them worked. We were stuck on the error message “The HTTP request is unauthorized with client authentication scheme ‘Ntlm’. The authentication header received from the server was ‘Negotiate,NTLM’.”, which unfortunately is one of those error messages that do not make sense.

In the painful process of pursuing the truth, we came across some posts attributing the problem to the load balancer, which discouraged us from continuing the research. It seemed more reasonable to find an alternative at that point, and we did find that using BasicHttpBinding with Windows Authentication and TransportCredentialOnly worked in our environment.

Here is our client setup:

  <binding name="BasicHttpBinding">
    <security mode="Transport">
      <transport clientCredentialType="Basic" />
    </security>
  </binding>

Service setup:

  <binding name="BasicHttpEndpointBinding">
    <security mode="TransportCredentialOnly">
      <transport clientCredentialType="Basic" />
    </security>
  </binding>

The problem with this is that the client's credentials are passed in clear text. Although traffic before the load balancer is protected by HTTPS, we still want true end-to-end protection of the credentials. So this solution is off the table, and we decided to go back to our original plan.

I will just skip to the end of the story because I like magic!… We eventually found a solution that worked in the load-balanced environment. A custom binding once again saved the world! I didn’t find any article about this configuration, which makes it all the more important to share it with everyone.

Client side:

  <binding name="customBinding_WindowsAuth">
    <security authenticationMode="SspiNegotiated" />
    <textMessageEncoding>
      <readerQuotas />
    </textMessageEncoding>
    <httpsTransport authenticationScheme="Anonymous"
      proxyAuthenticationScheme="Anonymous" useDefaultWebProxy="true" />
  </binding>

Service side:

 <binding name="WsBindingConfigration">
   <security mode="Message">
     <message clientCredentialType="Windows"
       negotiateServiceCredential="true" algorithmSuite="Default" />
   </security>
 </binding>

Use Message security on a WCF service behind an SSL handler

Configuring a WCF web service’s security is just tedious. Microsoft has been trying to make it simple by removing many configuration settings in .NET 4.5, but it can still get messy if you need to touch the security part – there are many bindings, and there are Message, Transport, and TransportWithMessageCredential security modes, each with their own client credential types, not to mention all those authentication modes for custom bindings such as AnonymousForCertificate, IssuedTokenForCertificate, IssuedTokenOverTransport, etc.

Developers building WCF web services on Microsoft's platform are, in this sense, users of the platform. It’s supposed to be user friendly and intuitive, but I found it quite difficult to select the right security configuration in different scenarios. Even after you read Microsoft’s documentation carefully, you still have a very limited idea of how these security modes differ from each other.

This post aims to cover one small scenario of WCF security – using a WCF web service with cert-based Message security behind a front-end SSL handler. Often, your web servers sit behind a load balancer that handles all SSL requests and passes plain HTTP requests to your IIS. Below is a diagram showing the infrastructure.

(Diagram: clients send HTTPS requests to the load balancer/SSL handler, which forwards plain HTTP requests to the IIS web servers.)

At first glance, it seems pretty straightforward – Transport security mode covers SSL security, and Message security mode handles message encryption. Hey, there is even a security mode for the two combined: TransportWithMessageCredential. We should be able to just use that on both the client and service side to achieve what we want, right? However, you just can’t be so optimistic in the world of software development.

This configuration won’t work. First of all, since the service is not actually receiving HTTPS requests, Transport mode should not be used; we just need Message security. So below is the correct configuration on the service side.

     <binding name="WsBindingConfigration">
       <security mode="Message">
         <message clientCredentialType="Certificate" negotiateServiceCredential="true"
           algorithmSuite="Default" establishSecurityContext="false" />
       </security>
     </binding>

The client side is where it gets tricky. I can’t really explain why TransportWithMessageCredential doesn’t work – something goes wrong under the hood. But here is the configuration that worked for me: use MutualSslNegotiated mode with a CustomBinding!

        <binding name="customBinding_CertAuth_ClientService">
          <security authenticationMode="MutualSslNegotiated" />
          <httpsTransport authenticationScheme="Anonymous"
            proxyAuthenticationScheme="Anonymous" useDefaultWebProxy="true" />
        </binding>

It took me a long while of research and trial and error to finally figure this out. Many development teams don’t have the time to mirror their clients' environments, which makes finding and troubleshooting issues like this difficult. Hopefully this post can help you out.


Session state is not available in this context

Who would have known that HttpApplication.Session throws an exception when the session is null, whereas HttpContext.Current.Session simply returns null?

I have an ASP.NET MVC application that checks session status in the Application_AcquireRequestState handler. If a session is newly created and a cookie containing a session id is included in the request header, the code signs the user out (because the user’s session has expired).

The problem is that before checking whether the user’s session is a new session, I need to make sure that the session is not null. I first attempted this using this.Session, which comes from the base class HttpApplication.

 if (Session != null && Session.IsNewSession)

However, when I later deployed it to the client’s environment, I noticed many “Session state is not available in this context” errors being thrown. I also noticed an interesting thing – when Debug mode is on, the error is not thrown, which is partly why I didn’t encounter this during development. However, this behavior doesn't seem consistent – today, when I tested the same piece of code in a different environment, the same exception was thrown only in Debug mode.

So this issue can occur depending on the environment. What’s going on, and what do we do?

The answer is – use HttpContext.Current.Session instead.

It turns out that the implementation of HttpApplication.Session throws an exception when the session is null, so you cannot even check its value as I did. According to an answer on Stack Overflow, the implementation is something like this:

[Browsable(false), DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
public HttpSessionState Session
{
    get
    {
        HttpSessionState session = null;

        if (this._session != null)
            session = this._session;
        else if (this._context != null)
            session = this._context.Session;

        if (session == null)
            throw new HttpException(SR.GetString("Session_not_available"));

        return session;
    }
}

OperationContext.Current is Null

Today I tried to run an old WCF project and it refused to work. The error message was “OperationContext.Current is null”. The project uses a third-party WCF custom encoder that enables SOAP with Attachments, which made the issue harder to debug. After struggling with it the whole day, I was surprised to find that the issue was caused by a WS-Addressing Action mismatch between my client and service. It turned out that in a certain version of the code I had modified the action name on both the client and the service, and today my code was not at the latest version, so the client and service had different action values.

What was misleading was that the error message didn’t mention anything about the action; it only said that OperationContext.Current was null. And I had explicitly used OperationContext on my client side:

PayloadsSoapClient Client = new PayloadsSoapClient("CustomBinding_PayloadsSoap_Swa");
using (OperationContextScope scope = new OperationContextScope(Client.InnerChannel))

And the code below shows the contract and binding used by this client:

      <PayloadsSoap name="CustomBinding_PayloadsSoap_Swa"/>

The client uses the contract “PayloadsSoap”, and that is where the issue comes from. On the client side we have set the WS-Addressing Action (OperationContractAttribute.Action), which should match the same property on the service side. Below is the client side contract.

[System.CodeDom.Compiler.GeneratedCodeAttribute("System.ServiceModel", "")]
public interface PayloadsSoap
{
    service1Response service1(service1Request request);
}

Note: the OperationContractAttribute.ReplyAction above defines the value of the SOAP action, and a mismatch in that does not cause the issue I had.

I also ran a few tests after identifying the issue, just to be sure. In my tests, whenever there is a mismatch in the WS-Addressing Action, an exception is thrown by the custom encoder, which needs to edit the OperationContext.Current object. My guess is that because of the mismatch, the OperationContext used by the client ends up different from the OperationContext used by the service.

Next time I will make sure my local copy of the code is up to date first.