Tuesday, August 21, 2012

Profile specific web.config transforms and transform preview

When we released VS2010 we added support for web.config (XDT) transforms during publish/package. Note: from here on I'll only use the word publish, but everything below applies to packaging as well. In the original implementation, when you published your web project the web.config file would be transformed by the file web.{Configuration}.config, where {Configuration} is the project build configuration, for example Debug or Release. If you publish with the Release configuration and a web.release.config exists, we will take your web.config and transform it with web.release.config before publishing.

Cascading web.config transformations

VS 2012 (as well as the publishing updates for VS2010 delivered through the Azure SDK) now supports the concept of profile specific transforms. You can also now specify, on the publish dialog, the build configuration used when publishing with a given profile.
[Screenshot: the publish dialog with a profile named Production and the Configuration drop-down set to Release]
In this case I have created a profile named Production and set the Configuration to Release. When I publish this project the following transformations will be applied (if the files exist) in this order:
  1. web.release.config
  2. web.production.config
I think we got this wrong when we initially implemented the support. We should have created profile specific transforms instead of ones based on build configuration, but having these cascading transforms is still pretty useful. For example, in web.release.config I may want to remove the debug="true" attribute from the compilation element, and then inside the profile specific transform override appSettings, WCF endpoints, logging config, etc. for that environment.
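To make the cascade concrete, here is a minimal sketch of what such a pair of transforms might look like. The email app setting comes from the preview example later in this post; the exact key name and value here are just illustrative.

web.release.config (applied first, based on the build configuration):

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.web>
    <!-- Strip debug="true" from the compilation element for any Release-based publish -->
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>

web.production.config (applied second, based on the publish profile):

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <!-- Override the email setting for the Production environment only -->
    <add key="Email" value="support@production.example.com"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>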
In VS there is a right-click option on web.config for Add Config Transform, but we were not able to update that command to automatically create profile specific transforms. Don't worry, it will be released soon with one of our updates for web tooling. For now you will need to create a new file with the correct name and add it to your project. Note: if you want it to show up nested under web.config you'll need to add DependentUpon metadata with the value Web.config to the item in the .csproj/.vbproj file, as in the snippet below.
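For example, the project file entry for the new transform might look roughly like this (whether the item is Content or None depends on how the file was added; Web.Production.config is the profile specific transform from the example above):

<ItemGroup>
  <Content Include="Web.Production.config">
    <DependentUpon>Web.config</DependentUpon>
  </Content>
</ItemGroup>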

web.config transform preview

Previously the only way to test these transformations was to actually publish or package the web project. That gets old pretty quickly. In order to simplify creating these transforms we have introduced the Preview Transform menu option. This is the coolest feature in VS 2012 (OK, I'm a bit biased, but it's still the coolest).
[Screenshot: the Preview Transform option on the right-click menu of a transform file]
In my web.release.config I have left the default content, which just removes the debug attribute. Here is what I see when I select Preview Transform on web.release.config for my project.
[Screenshot: transform preview for web.release.config]
In the image above you can see that the debug flag was indeed removed, as expected.
In my web.production.config I have a transform which simply updates the email app setting value. Here is the really cool part: when I preview the transform for web.production.config, the previewer looks at the profile, determines the build configuration it uses, and ensures that configuration's transform is applied before the profile specific one. For example, take a look at the result for web.production.config.
[Screenshot: transform preview for web.production.config]
In the image above you can see the note that web.release.config was applied first followed by web.production.config. In the result we can see that web.release.config removed the debug flag and that web.production.config updated the email address value.
We also do a little bit to help out in case there are errors in either the web.config or a transform. Errors show up in the Output Window, and you can double-click one to go directly to where it occurs.
Note: Scott Hanselman has a 5-minute video showing this and other updates.
Another note: if you need to transform any file besides web.config during publish, install my extension SlowCheetah.

Cross posted to http://sedodream.com/2012/08/19/ProfileSpecificWebconfigTransformsAndTransformPreview.aspx
Sayed Ibrahim Hashimi | @SayedIHashimi

Friday, August 17, 2012

Performance Tips for Asynchronous Development in C#

In a recent online C# Corner column, "Exceptional Async Handling with Visual Studio Async CTP 3", I showed how the Visual Studio Async CTP (version 3), which extends Visual Studio 2010 SP1, handles aggregating exceptions that occur in asynchronous background methods. In this column, I'm going to cover the mechanics of the Async framework and offer some tips on maximizing its performance.
Breaking Up Is Hard to Do
A deep dive into exactly how the C# compiler implements Async is beyond the scope of this article. Instead, I'll highlight how the compiler breaks up and rearranges my code, so that I can write it in a synchronous fashion but still have the runtime execute it asynchronously.

Here's a simple example in a Windows Forms application (project "BreakingUpAsync" in the sample code). I have a single button on my form and when I click it, the form's caption will display the current time for the next 15 seconds:
private async void button1_Click(object sender, EventArgs e)
{
  var now = DateTime.Now;
  button1.Enabled = false;
  for (var x = 0; x < 15; x++)
  {
    this.Text = now.AddSeconds(x).ToString("HH:mm:ss");
    await TaskEx.Delay(1000);
  }
  button1.Enabled = true;
}
Nothing fancy here. I disable the button before the loop starts. Inside the loop, I update the form's caption and wait for one second. Finally, I re-enable the button.
Remove the "async" and "await" keywords and change TaskEx.Delay(1000) to Thread.Sleep(1000), and without Async support I'd lock up the UI. See my previous column, "Multithreading in WinForms", for more details. However, thanks to Async support, this code runs just fine with a fully responsive UI. How?
First, I pull out ILSpy, an open source .NET assembly browser and decompiler. ILSpy makes inspecting the IL generated by the C# compiler much easier. If you're a fan of IL, just use the MSIL Disassembler (Ildasm.exe).
Here's what my button1_Click event handler looks like after it's compiled (I've massaged the names a bit because the type names generated by the compiler can be pretty ugly to read):
private void button1_Click(object sender, EventArgs e)
{
  Form1.button1ClickCode clickInstance = new Form1.button1ClickCode(0);
  clickInstance.<>4__this = this;
  clickInstance.sender = sender;
  clickInstance.e = e;
  clickInstance.<>t__MoveNextDelegate = new Action(clickInstance.MoveNext);
  clickInstance.$builder = AsyncVoidMethodBuilder.Create();
  clickInstance.MoveNext();
}
No loop code. No enabling or disabling of the button. Where's the code I wrote? Notice the first thing this code does is create an instance of a class called button1ClickCode. This is a compiler-generated class that contains the code I originally put in the event handler, along with a bunch of state-based mechanics to handle asynchrony.
It's important to notice a few key things here. First off, this code is creating a new object. The Microsoft .NET Framework is pretty quick at allocating objects, but not without cost. This doesn't mean you should avoid Async. Quite the opposite: Writing code to handle this asynchronously without the Async framework might require even more objects to be created. Just be aware that this happens, and try not to make a bunch of fine-grained Async methods. Instead, opt for larger Async methods.
The next thing to notice is that the arguments of the event handler ("sender" and "e") are passed along to the button1ClickCode instance. Every local variable is "lifted" to this class. This is necessary because the code I wrote (which gets manipulated and placed in the special button1ClickCode class) probably uses those locals and, therefore, needs access to them. If I look at the generated code for the button1ClickCode class, I'll see:
  • A Form field, which has a reference to my form.
  • An object field, which has a reference to my "sender" argument.
  • An EventArgs field, which has a reference to my "e" argument.
  • A DateTime field that represents the "now" variable.
  • A field-level int to hold onto my "x" loop counter.
The compiler is creating a whole new object for this Async method (as I noted earlier). Now I see that this object's size can be affected by how I write my Async method. A bigger object means more pressure on memory, which leads to more garbage collections and decreased performance.
I can limit the size of that generated class by how I write my Async methods. In the previous example, I'm not using "sender" or "e" and I really don't need to store the current DateTime -- I can grab it each time I need it in the loop with DateTime.Now. So I rearrange my Click event handler as shown in Listing 1.
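Listing 1 ships with the downloadable sample code, but the rearranged handler described above would look roughly like this (a sketch based on the description, not the listing itself):
private async void button1_Click(object sender, EventArgs e)
{
  button1.Enabled = false;
  for (var x = 0; x < 15; x++)
  {
    // Grab the time each pass instead of lifting a "now" local into the generated class
    this.Text = DateTime.Now.ToString("HH:mm:ss");
    await TaskEx.Delay(1000);
  }
  button1.Enabled = true;
}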
Now when I use ILSpy to check out the generated class with my event handler code, there's no more reference to "sender," "e" or "now." I've trimmed three fields and, therefore, the resulting class has a smaller memory footprint. Granted, this is just a small example, but knowing this is happening can help you write better Async code.
The compiler-generated class that runs my code in the background (and thus, asynchronously) has to handle exceptions. That means it's wrapped in a try/catch block and has to handle storing and re-throwing the exception back on my UI thread should an exception happen. Again, not super-expensive in terms of memory/clock cycles, but it's important to know what you're getting into and be aware of it.
Finally, note the call to AsyncVoidMethodBuilder.Create inside the Click event handler. This is more setup for Async support. It also has a cost. Take a look at the StateMatchingBuilding project in the sample code. I have two empty methods: one I call synchronously and another I call asynchronously. If I sit in a loop and call each method about 10 million times, my laptop takes about 11 percent to 15 percent longer for the Async calls. Don't write Async methods just because you can -- write them because they make sense for your solution.
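That comparison has roughly the shape of the following sketch (a hypothetical reconstruction; the names here are mine, not the ones used in the sample project):
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class CallOverheadDemo
{
  static void EmptySync() { }

  // The async keyword alone adds builder/state-machine setup on every call
  static async Task EmptyAsync() { }

  static void Main()
  {
    const int iterations = 10000000;

    var sw = Stopwatch.StartNew();
    for (var i = 0; i < iterations; i++) EmptySync();
    Console.WriteLine("Sync:  " + sw.ElapsedMilliseconds + " ms");

    sw.Restart();
    // The returned Task is already completed; it's ignored here
    for (var i = 0; i < iterations; i++) EmptyAsync();
    Console.WriteLine("Async: " + sw.ElapsedMilliseconds + " ms");
  }
}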
Be Careful How You Wait
Another "gotcha" to watch out for is how you wait for an Async process to complete. Suppose I have the following method that does something and returns a Task:
public Task DoSomething()
{
  // Create and return a Task that does something intensive.
  // Placeholder work shown here just so the method compiles:
  return Task.Factory.StartNew(() => Thread.SpinWait(100000000));
}
This method returns a Task, so there are two ways I can wait for it to finish. The best way would be to use the C# "await" keyword that I've been using:
public async void GoodWait()
{
  await DoSomething();
}
However, because DoSomething returns a Task, I could also just as easily use the Task Wait method:

public void BadWait()
{
  DoSomething().Wait();
}
The problem with the Wait method is that it's synchronous. The Task might be off doing something, but by calling Wait, my code sits right there inside the BadWait method until the Task completes. Imagine if this were in a Windows Forms app inside of a button click event. My UI would be locked waiting for the Task to complete.
On the other hand, by using the "await" keyword, a state machine is built to move my code into another class and run it asynchronously -- so the waiting actually happens asynchronously. No UI lockups, and it removes the possibility of deadlocks between the Async code and the caller that may be waiting for completion.
Cache Task Results When Possible
As I noted earlier, the C# compiler creates additional objects to handle the asynchronous implementation. More objects mean more pressure on the garbage collector. That, in turn, can have a negative impact on my application's performance. Here's another case where a few tweaks give me more performance from my code.
Let's say I have an application that has to check about 100 Web sites to see if they're up and running. Network calls and possible timeouts could negatively affect my application's responsiveness, so I'm going to do the site checks asynchronously.
For this example, I don't want to actually make 100 network calls, so I have a simple way to return a consistent set of data (see the project "CacheResults" in the sample code):
public static async Task<bool> SiteIsUpAsync(string url)
{
  // Fake check: pretend even-length URLs are up so the demo returns consistent data
  return url.Length % 2 == 0;
}
The issue with this sample code is that every call to this method will result in either a true or a false result, but I'm creating a new Task for every call. This approach is going to create a lot of extra objects and put more pressure on the garbage collector.
Instead, I could cache a completed Task<bool> for the "true" result and another Task<bool> for the "false" result. This approach only adds two objects and greatly reduces the amount of work the garbage collector has to do. The code is a little more involved, but the impact is huge, as shown in Listing 2.
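Listing 2 is included in the downloadable sample; the core of the caching idea looks roughly like this (a sketch under my own helper names, which are not taken from the sample):
private static readonly Task<bool> SiteUpTask = BuildResultTask(true);
private static readonly Task<bool> SiteDownTask = BuildResultTask(false);

private static Task<bool> BuildResultTask(bool result)
{
  // TaskCompletionSource creates an already-completed Task<bool> we can hand out repeatedly
  var tcs = new TaskCompletionSource<bool>();
  tcs.SetResult(result);
  return tcs.Task;
}

public static Task<bool> SiteIsUpAsync(string url)
{
  // Return one of the two cached tasks instead of building a new Task on every call
  return url.Length % 2 == 0 ? SiteUpTask : SiteDownTask;
}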
When the Listing 2 code runs in a loop that checks 100 sites 100,000 times, my laptop gives me about a 55 percent to 60 percent increase in performance by caching the results (instead of returning a new result each time). Anytime you have results from an Async method that may be repeated from call to call, consider caching the results instead of creating a new result for each invocation.
The Microsoft Visual Studio Async framework is a great tool for your tool belt. Just make sure you understand some of the inner workings of the technology -- then you'll really see the benefits that asynchronous programming can bring to your applications.
