Tuesday, August 28, 2012
Free eBooks for Data Developers
Free eBooks for Data Developers: What are some of your favorite development resources? Chime in and share what you use to help others out.
Tuesday, August 21, 2012
Releasing Microsoft ASP.NET Universal Providers Core 1.2
I hope everyone has been busy downloading Visual Studio 2012 and has started building awesome web applications. We have been busy too during this time.
We have just released an update to the version of the Universal Providers that shipped with VS2012. The key changes in this release are:
- Address key performance issues with the providers
- Depends on EntityFramework Code First
What should you do?
The version of Universal Providers Core that shipped with VS2012 was 1.1. Take a moment and update to 1.2. You can follow this documentation on how to update a NuGet package.
While updating you will also get the EntityFramework 5.0.0 package from nuget.org.
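If you use the Package Manager Console, the update is a one-liner; the package ID below is an assumption based on the release name, so verify it against what is actually installed in your project:
PM> Update-Package Microsoft.AspNet.Providers.Core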
Some common FAQs
- Works on .NET v4.0/4.5
- Works on VS2010/VS2012
- 1.2 is compatible with 1.1
- ASP.NET forums http://forums.asp.net/25.aspx/1?Security
- Stack Overflow: Use the tag “membership” Following are some questions for universal providers on SO http://stackoverflow.com/search?q=universal+providers
Cross posted to http://blogs.msdn.com/b/pranav_rastogi/archive/2012/08/20/releasing-microsoft-asp-net-universal-providers-core-1-2.aspx
Using Dynamic Data with Entity Framework DbContext
In Visual Studio 2012, if you create an ADO.NET Entity Data Model, the generated context class derives from a type called DbContext instead of ObjectContext. DbContext is also used when you are using Entity Framework Code First.
This post outlines the changes you have to make to your Dynamic Data project template if you want your context to derive from DbContext.
1. Change Global.asax to get the ObjectContext
// Unwrap the underlying ObjectContext from the DbContext-derived context;
// YourContextType is a placeholder for your own context class.
DefaultModel.RegisterContext(() =>
{
    return ((IObjectContextAdapter)new YourContextType()).ObjectContext;
}, new ContextConfiguration() { ScaffoldAllTables = true });
2. Change ManyToMany.ascx.cs in the DynamicData\FieldTemplates folder
protected override void OnDataBinding(EventArgs e)
{
base.OnDataBinding(e);
object entity;
ICustomTypeDescriptor rowDescriptor = Row as ICustomTypeDescriptor;
if (rowDescriptor != null)
{
// Get the real entity from the wrapper
entity = rowDescriptor.GetPropertyOwner(null);
}
else
{
entity = Row;
}
// Get the collection and make sure it's loaded
var entityCollection = Column.EntityTypeProperty.GetValue(entity, null);
var realEntityCollection = entityCollection as RelatedEnd;
if (realEntityCollection != null && !realEntityCollection.IsLoaded)
{
realEntityCollection.Load();
}
// Bind the repeater to the list of children entities
Repeater1.DataSource = entityCollection;
Repeater1.DataBind();
}
public override Control DataControl
{
get
{
return Repeater1;
}
}
3. Change ManyToMany_Edit.ascx.cs in the DynamicData\FieldTemplates folder
protected ObjectContext ObjectContext { get; set; }
public void Page_Load(object sender, EventArgs e)
{
// Register for the DataSource's updating event
EntityDataSource ds = (EntityDataSource)this.FindDataSourceControl();
ds.ContextCreated += (_, ctxCreatedEventArgs) => ObjectContext = ctxCreatedEventArgs.Context;
// This field template is used both for Editing and Inserting
ds.Updating += new EventHandler<EntityDataSourceChangingEventArgs>(DataSource_UpdatingOrInserting);
ds.Inserting += new EventHandler<EntityDataSourceChangingEventArgs>(DataSource_UpdatingOrInserting);
}
void DataSource_UpdatingOrInserting(object sender, EntityDataSourceChangingEventArgs e)
{
MetaTable childTable = ChildrenColumn.ChildTable;
// Comments assume employee/territory for illustration, but the code is generic
if (Mode == DataBoundControlMode.Edit)
{
ObjectContext.LoadProperty(e.Entity, Column.Name);
}
// Get the collection and make sure it's loaded
dynamic entityCollection = Column.EntityTypeProperty.GetValue(e.Entity, null);
// Go through all the territories (not just those for this employee)
foreach (dynamic childEntity in childTable.GetQuery(e.Context))
{
// Check if the employee currently has this territory
var isCurrentlyInList = ListContainsEntity(childTable, entityCollection, childEntity);
// Find the checkbox for this territory, which gives us the new state
string pkString = childTable.GetPrimaryKeyString(childEntity);
ListItem listItem = CheckBoxList1.Items.FindByValue(pkString);
if (listItem == null)
continue;
// If the states differ, make the appropriate add/remove change
if (listItem.Selected)
{
if (!isCurrentlyInList)
entityCollection.Add(childEntity);
}
else
{
if (isCurrentlyInList)
entityCollection.Remove(childEntity);
}
}
}
private static bool ListContainsEntity(MetaTable table, IEnumerable<object> list, object entity)
{
return list.Any(e => AreEntitiesEqual(table, e, entity));
}
private static bool AreEntitiesEqual(MetaTable table, object entity1, object entity2)
{
return Enumerable.SequenceEqual(table.GetPrimaryKeyValues(entity1), table.GetPrimaryKeyValues(entity2));
}
protected void CheckBoxList1_DataBound(object sender, EventArgs e)
{
MetaTable childTable = ChildrenColumn.ChildTable;
// Comments assume employee/territory for illustration, but the code is generic
IEnumerable<object> entityCollection = null;
if (Mode == DataBoundControlMode.Edit)
{
object entity;
ICustomTypeDescriptor rowDescriptor = Row as ICustomTypeDescriptor;
if (rowDescriptor != null)
{
// Get the real entity from the wrapper
entity = rowDescriptor.GetPropertyOwner(null);
}
else
{
entity = Row;
}
// Get the collection of territories for this employee and make sure it's loaded
entityCollection = (IEnumerable<object>)Column.EntityTypeProperty.GetValue(entity, null);
var realEntityCollection = entityCollection as RelatedEnd;
if (realEntityCollection != null && !realEntityCollection.IsLoaded)
{
realEntityCollection.Load();
}
}
// Go through all the territories (not just those for this employee)
foreach (object childEntity in childTable.GetQuery(ObjectContext))
{
// Create a checkbox for it
ListItem listItem = new ListItem(
childTable.GetDisplayString(childEntity),
childTable.GetPrimaryKeyString(childEntity));
// Make it selected if the current employee has that territory
if (Mode == DataBoundControlMode.Edit)
{
listItem.Selected = ListContainsEntity(childTable, entityCollection, childEntity);
}
CheckBoxList1.Items.Add(listItem);
}
}
public override Control DataControl
{
get
{
return CheckBoxList1;
}
}
At this point you should be good to run your application and use DbContext or Entity Framework Code First with the Dynamic Data templates.
Migration for user accounts from the SqlMembershipProvider to the Universal Providers
As you know, the ASP.NET SqlMembershipProvider and SqlRoleProvider only support Microsoft SQL Server and Microsoft SQL Server Express; there is no support for Microsoft SQL Azure or Microsoft SQL Server Compact. The ASP.NET Universal Providers were created to add support for SQL Azure and to be ready for cloud environments like Azure.
Here we will talk about how to migrate an existing project that uses the SqlMembershipProvider for user accounts and passwords over to the Universal Providers.
First, install the Universal Providers NuGet package. This updates the existing project to use the Universal Providers. You can then migrate the existing user accounts and passwords from the SqlMembershipProvider to the Universal Providers using the instructions below.
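For example, from the Package Manager Console (the package ID here is an assumption; check nuget.org for the exact name):
PM> Install-Package Microsoft.AspNet.Providers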
Migrate all the accounts from the old tables to the new tables:
- For Microsoft ASP.NET Universal Providers 1.1/1.2, below are sample SQL scripts for the membership and role providers (these don't cover the profile provider):
INSERT INTO dbo.Applications (ApplicationName, ApplicationId, Description)
SELECT ApplicationName, ApplicationId, Description FROM dbo.aspnet_Applications
GO
INSERT INTO dbo.Roles (ApplicationId, RoleId, RoleName, Description)
SELECT ApplicationId, RoleId, RoleName, Description FROM dbo.aspnet_Roles
GO
INSERT INTO dbo.Users (ApplicationId, UserId, UserName, IsAnonymous, LastActivityDate)
SELECT ApplicationId, UserId, UserName, IsAnonymous, LastActivityDate FROM dbo.aspnet_Users
GO
INSERT INTO dbo.Memberships (ApplicationId, UserId, Password, PasswordFormat, PasswordSalt, Email, PasswordQuestion, PasswordAnswer, IsApproved, IsLockedOut, CreateDate, LastLoginDate, LastPasswordChangedDate, LastLockoutDate, FailedPasswordAttemptCount, FailedPasswordAttemptWindowStart, FailedPasswordAnswerAttemptCount, FailedPasswordAnswerAttemptWindowStart, Comment)
SELECT ApplicationId, UserId, Password, PasswordFormat, PasswordSalt, Email, PasswordQuestion, PasswordAnswer, IsApproved, IsLockedOut, CreateDate, LastLoginDate, LastPasswordChangedDate, LastLockoutDate, FailedPasswordAttemptCount, FailedPasswordAttemptWindowStart, FailedPasswordAnswerAttemptCount, FailedPasswordAnswerAttemptWindowStart, Comment FROM dbo.aspnet_Membership
GO
INSERT INTO dbo.UsersInRoles SELECT * FROM dbo.aspnet_UsersInRoles
GO
After all the accounts are migrated from the old tables to the new tables, you can update the configuration settings for the Universal Providers (if needed) to map to the appropriate settings on the SqlMembershipProvider. In that case a password reset won't be needed and existing users will still be able to log on.
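As a quick sanity check after running the scripts (my addition, not part of the original scripts), you can compare row counts between the old and new tables:
SELECT (SELECT COUNT(*) FROM dbo.aspnet_Users) AS OldUsers,
       (SELECT COUNT(*) FROM dbo.Users) AS NewUsers,
       (SELECT COUNT(*) FROM dbo.aspnet_Membership) AS OldMemberships,
       (SELECT COUNT(*) FROM dbo.Memberships) AS NewMemberships
GO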
Here is a list of the settings for SqlMembershipProvider that should be mapped to the settings on the Universal Providers DefaultMembershipProvider:
1. Default settings in membership with SqlMembershipProvider:
In SqlMembershipProvider, by default passwordCompatMode is Framework20.
In DefaultMembershipProvider, by default passwordCompatMode is Framework40.
2. Specified hashAlgorithmType setting in membership with SqlMembershipProvider (e.g., SHA256):
In SqlMembershipProvider, a specified hashAlgorithmType will be used regardless of passwordCompatMode.
In DefaultMembershipProvider, because of a Medium-trust security issue with reading the hashAlgorithmType setting from the membership section, a specified hashAlgorithmType will be used only when passwordCompatMode is Framework40.
">
SqlMembershipProvider DefaultMembershipProvider e.g. e.g. SHA256
" />
" />
" />
" />
3. Specified Framework40 passwordCompatMode in SqlMembershipProvider:
Framework40 matches the DefaultMembershipProvider default, so the passwordCompatMode setting carries over directly.
Links to ASP.NET WebAPI blog posts and Data Access blog posts
Here are some blog posts about ASP.NET Web API and ASP.NET data access that we recently published.
- ASP.NET Web API Released and a Preview of What’s Next
- Introducing the ASP.NET Web API Help Page (Preview) [Video]
- OData support in ASP.NET Web API
- ASP.NET Web API Tracing (Preview)
- ASP.NET Data Access Content Map
- Choosing Data Access Options for ASP.NET Web Forms Applications
- Choosing a SQL Server Edition for ASP.NET Web Application Development
- SQL Server Connection Strings for ASP.NET Web Applications
- ASP.NET Data Access FAQ
Enjoy web development!
Profile specific web.config transforms and transform preview
When we released VS2010 we added support for web.config (XDT) transforms during publish/package. Note: from here on I'll just say publish, but everything applies to packaging as well.
In the original implementation, when you published your web project the web.config file would be transformed by the file web.{Configuration}.config, where {Configuration} is the project build configuration, for example Debug or Release. If you publish with the Release configuration and a web.release.config exists, we take your web.config and transform it with web.release.config before publishing.
Cascading web.config transformations
VS 2012 (as well as the publishing updates for VS2010 delivered through the Azure SDK) now supports the concept of profile-specific transforms. You can also now specify, on the publish dialog, the project build configuration used for a profile when publishing. In this case I have created a profile named Production and set the Configuration to Release. When I publish this project the following transformations will be applied (if the files exist) in this order:
- web.release.config
- web.production.config
In VS there is a right-click option on web.config for Add Config Transform, but we were not able to update that functionality to automatically create profile-specific transforms. Don't worry, it will be released soon with one of our updates for web tooling. For now you will need to create a new file with the correct name and add it to your project. Note: if you want it to show up nested under web.config, you'll need to add the metadata DependentUpon with the value Web.config to the item in the .csproj/.vbproj file.
web.config transform preview
Previously, the only way to test these transformations was to actually publish or package the web project, which gets old pretty quickly. To simplify creating these transforms we have introduced the Preview Transform menu option. This is the coolest feature in VS 2012 (OK, I'm a bit biased, but still, it's the coolest). In my web.release.config I have left the default contents, which just remove the debug attribute. Here is what I see when I select Preview Transform on web.release.config for my project.
In the image above you can see that the debug flag was indeed removed, as expected.
In my web.production.config I have a transform which simply updates the email app setting value. Here is the really cool part: when I preview the transform for web.production.config, the previewer looks into the profile, determines which build configuration has been set, and ensures that configuration's transform is applied before the profile-specific one. For example, take a look at the result for web.production.config.
In the image above you can see the note that web.release.config was applied first followed by web.production.config. In the result we can see that web.release.config removed the debug flag and that web.production.config updated the email address value.
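For reference, a profile-specific transform like the web.production.config described above would look something like this (the Email key and address are hypothetical placeholders, not taken from the post):
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <add key="Email" value="admin@production.example.com"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>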
We also do a little bit to help out in case there are errors in either the web.config or a transform. Errors show up in the Output Window, and you can double-click one to go directly to where the error is.
Note: Scott Hanselman has a 5 minute video showing this and other updates.
Another note: If you need to transform any file besides web.config during publish then install my extension SlowCheetah.
Cross posted to http://sedodream.com/2012/08/19/ProfileSpecificWebconfigTransformsAndTransformPreview.aspx
Sayed Ibrahim Hashimi | @SayedIHashimi
Friday, August 17, 2012
Performance Tips for Asynchronous Development in C#
In a recent online C# Corner column, "Exceptional Async Handling with Visual Studio Async CTP 3", I showed how the Visual Studio Async CTP (version 3), which extends Visual Studio 2010 SP1, handles aggregating exceptions that happen in background, asynchronous methods. In this column, I'm going to cover the mechanics of the Async framework and offer some tips on maximizing its performance.
Breaking Up Is Hard to Do
A deep dive of exactly how the C# compiler implements Async is beyond the scope of this article. Instead, I'll highlight how the compiler breaks up and rearranges my code, so that I can write it in a synchronous fashion but still have the runtime execute it asynchronously.
Here's a simple example in a Windows Forms application (project "BreakingUpAsync" in the sample code). I have a single button on my form and when I click it, the form's caption will display the current time for the next 15 seconds:
private async void button1_Click(object sender, EventArgs e)
{
    var now = DateTime.Now;
    button1.Enabled = false;
    for (var x = 0; x < 15; x++)
    {
        this.Text = now.AddSeconds(x).ToString("HH:mm:ss");
        await TaskEx.Delay(1000);
    }
    button1.Enabled = true;
}
Nothing fancy here. I disable the button at the start of the loop. Inside the loop, I update the form's caption and wait for one second. Finally, I re-enable the button.
Remove the "async" keyword, the "await" keyword and change TaskEx.Delay(1000) to Thread.Sleep(1000), and without Async support, I'd lock up the UI. See my previous column, "Multithreading in WinForms"), for more details. However, thanks to Async support, this code runs just fine with a fully responsive UI. How?
First, I pull out ILSpy, an open source .NET assembly browser and decompiler. ILSpy makes inspecting the IL generated by the C# compiler much easier. If you're a fan of IL, just use the MSIL Disassembler (Ildasm.exe).
Here's what my button1_Click event handler looks like after it's compiled (I've massaged the names a bit because the type names generated by the compiler can be pretty ugly to read):
private void button1_Click(object sender, EventArgs e)
{
    Form1.button1ClickCode clickInstance = new Form1.button1ClickCode(0);
    clickInstance.<>4__this = this;
    clickInstance.sender = sender;
    clickInstance.e = e;
    clickInstance.<>t__MoveNextDelegate = new Action(clickInstance.MoveNext);
    clickInstance.$builder = AsyncVoidMethodBuilder.Create();
    clickInstance.MoveNext();
}
No loop code. No enabling or disabling of the button. Where's the code I wrote? Notice the first thing this code does is create an instance of a class called button1ClickCode. This is a compiler-generated class that contains the code I originally put in the event handler, along with a bunch of state-based mechanics to handle asynchrony.
It's important to notice a few key things here. First off, this code is creating a new object. The Microsoft .NET Framework is pretty quick at allocating objects, but not without cost. This doesn't mean you should avoid Async. Quite the opposite: Writing code to handle this asynchronously without the Async framework might require even more objects to be created. Just be aware that this happens, and try not to make a bunch of fine-grained Async methods. Instead, opt for larger Async methods.
The next thing to notice is that the arguments of the event handler ("sender" and "e") are passed along to the button1ClickCode instance. Every local variable is "lifted" to this class. This is necessary because the code I wrote (which gets manipulated and placed in the special button1ClickCode class) probably uses those locals and, therefore, needs access to them. If I look at the generated code for the button1ClickCode class, I'll see:
- A Form field, which has a reference to my form.
- An object field, which has a reference to my "sender" argument.
- An EventArgs field, which has a reference to my "e" argument.
- A DateTime field that represents the "now" variable.
- A field-level int to hold onto my "x" loop counter.
I can limit the size of that generated class by how I write my Async methods. In the previous example, I'm not using "sender" or "e" and I really don't need to store the current DateTime -- I can grab it each time I need it in the loop with DateTime.Now. So I rearrange my Click event handler as shown in Listing 1.
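Listing 1 isn't reproduced in this excerpt; a sketch of the rearranged handler based on the description above (no use of sender, e, or a captured now local) would be:
private async void button1_Click(object sender, EventArgs e)
{
    button1.Enabled = false;
    for (var x = 0; x < 15; x++)
    {
        // Grab the time on each iteration instead of lifting a local
        // into the compiler-generated class.
        this.Text = DateTime.Now.ToString("HH:mm:ss");
        await TaskEx.Delay(1000);
    }
    button1.Enabled = true;
}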
Now when I use ILSpy to check out the generated class with my event handler code, there's no more reference to "sender," "e" or "now." I've trimmed three fields and, therefore, the resulting class has a smaller memory footprint. Granted, this is just a small example, but knowing this is happening can help you write better Async code.
The compiler-generated class that runs my code in the background (and thus, asynchronously) has to handle exceptions. That means it's wrapped in a try/catch block and has to handle storing and re-throwing the exception back on my UI thread should an exception happen. Again, not super-expensive in terms of memory/clock cycles, but it's important to know what you're getting into and be aware of it.
Finally, note the call to AsyncVoidMethodBuilder.Create inside the Click event handler. This is more setup for Async support. It also has a cost. Take a look at the StateMatchingBuilding project in the sample code. I have two empty methods: one I call synchronously and another I call asynchronously. If I sit in a loop and call each method about 10 million times, my laptop takes about 11 percent to 15 percent longer for the Async calls. Don't write Async methods just because you can -- write them because they make sense for your solution.
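A rough sketch of that measurement (the method names are my assumptions, not the article's actual StateMatchingBuilding code):
private static void EmptySync() { }

// No await inside, but the async keyword alone makes the compiler emit the
// state-machine and AsyncVoidMethodBuilder setup on every call.
private static async void EmptyAsync() { }

private static void CompareCallOverhead()
{
    const int iterations = 10000000;
    var sw = System.Diagnostics.Stopwatch.StartNew();
    for (var i = 0; i < iterations; i++) EmptySync();
    Console.WriteLine("sync:  {0}", sw.Elapsed);
    sw.Restart();
    for (var i = 0; i < iterations; i++) EmptyAsync();
    Console.WriteLine("async: {0}", sw.Elapsed);
}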
Be Careful How You Wait
Another "gotcha" to watch out for is how you wait for an Async process to complete. Suppose I have the following method that does something and returns a Task:
public Task DoSomething()
{
    // Create and return a Task that does something intensive
    // (the TaskEx.Run body is a placeholder I've added so the sketch compiles).
    return TaskEx.Run(() => Thread.Sleep(5000));
}
This method returns a Task, so there are two ways I can wait for it to finish. The best way would be to use the C# "await" keyword that I've been using:
public async void GoodWait()
{
    await DoSomething();
}
However, because DoSomething returns a Task, I could also just as easily use the Task Wait method:
public void BadWait()
{
    DoSomething().Wait();
}
The problem with the Wait method is that it's synchronous. The Task might be off doing something, but by calling Wait, my code sits right there inside the BadWait method until the Task completes. Imagine if this were in a Windows Forms app inside of a button click event. My UI would be locked waiting for the Task to complete.
On the other hand, by using the "await" keyword, a state machine is built to move my code into another class and run it asynchronously -- so the waiting actually happens asynchronously. No UI lockups, and it removes the possibility of deadlocks between the Async code and the caller that may be waiting for completion.
Cache Task Results When Possible
As I noted earlier, the C# compiler creates additional objects to handle the asynchronous implementation. More objects mean more pressure on the garbage collector. That, in turn, can have a negative impact on my application's performance. Here's another case where a few tweaks give me more performance from my code.
Let's say I have an application that has to check about 100 Web sites to see if they're up and running. Network calls and possible timeouts could negatively affect my application's responsiveness, so I'm going to do the site checks asynchronously.
For this example, I don't want to actually make 100 network calls, so I have a simple way to return a consistent set of data (see the project "CacheResults" in the sample code):
public static async Task<bool> SiteIsUpAsync(string url)
{
    return url.Length % 2 == 0;
}
The issue with this sample code is that every call to this method will result in either a true or a false result, but I'm creating a new Task<bool> for every call. This approach is going to create a lot of extra objects and put more pressure on the garbage collector. Instead, I could cache an instance of Task<bool> for the "true" result and another Task<bool> for the "false" result. This approach only adds two objects and greatly reduces the amount of work the garbage collector has to do. The code is a little more involved, but the impact is huge, as shown in Listing 2.
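The article's Listing 2 ships with its sample code download; a minimal sketch of the caching idea (field and helper names are mine) looks like this:
private static readonly Task<bool> CachedTrueTask = CreateCompletedTask(true);
private static readonly Task<bool> CachedFalseTask = CreateCompletedTask(false);

private static Task<bool> CreateCompletedTask(bool result)
{
    // Build an already-completed Task<bool> once; .NET 4.5 offers
    // Task.FromResult as a shortcut for this.
    var tcs = new TaskCompletionSource<bool>();
    tcs.SetResult(result);
    return tcs.Task;
}

public static Task<bool> SiteIsUpAsync(string url)
{
    // Hand back one of the two cached tasks instead of allocating
    // a new Task<bool> on every call.
    return url.Length % 2 == 0 ? CachedTrueTask : CachedFalseTask;
}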
When the Listing 2 code runs in a loop that checks 100 sites 100,000 times, my laptop gives me about a 55 percent to 60 percent increase in performance by caching the results (instead of returning a new result each time). Anytime you have results from an Async method that may be repeated from call to call, consider caching the results instead of creating a new result for each invocation.
The Microsoft Visual Studio Async framework is a great tool for your tool belt. Just make sure you understand some of the inner workings of the technology -- then you'll really see the benefits that asynchronous programming can bring to your applications.
The New Read-Only Collections in .NET 4.5
Eric Vogel covers some practical uses for the long-awaited interfaces IReadOnlyList and IReadOnlyDictionary in .NET Framework 4.5.
- By Eric Vogel
- 08/08/2012
The IReadOnlyCollection<T> interface, which forms the base of the IReadOnlyList<T> and IReadOnlyDictionary<TKey, TValue> interfaces, is defined as IReadOnlyCollection<out T>.
Prior to .NET 4.5, the primary covariant collection interface was IEnumerable<out T>.
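For reference, here is the shape of the three interfaces as they appear in .NET 4.5 (member lists summarized from the framework documentation, not from the article):
public interface IReadOnlyCollection<out T> : IEnumerable<T>
{
    int Count { get; }
}

public interface IReadOnlyList<out T> : IReadOnlyCollection<T>
{
    T this[int index] { get; }
}

public interface IReadOnlyDictionary<TKey, TValue>
    : IReadOnlyCollection<KeyValuePair<TKey, TValue>>
{
    TValue this[TKey key] { get; }
    IEnumerable<TKey> Keys { get; }
    IEnumerable<TValue> Values { get; }
    bool ContainsKey(TKey key);
    bool TryGetValue(TKey key, out TValue value);
}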
A common scenario you may run into is storing a list of people or employees. The application may be a case or customer relationship management system; either way, you're dealing with similar class representations. For example, you might have a Person class that contains FirstName and LastName properties (Listing 1), and an Employee subclass that adds EIN and Salary properties (Listing 2). This is a very simplified view of a business domain, but it gets the picture across.
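Listings 1 and 2 aren't reproduced in this excerpt; a minimal sketch matching their description would be:
public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class Employee : Person
{
    public int EIN { get; set; }
    public decimal Salary { get; set; }
}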
You could then create a typed list of Employee objects and access them as a read-only collection using the new interfaces. In a real-world application, your employee list is likely to be quite large and retrieved from a database.
List<Employee> employees = new List<Employee>()
{
    new Employee() { EIN = 1, FirstName = "John", LastName = "Doe", Salary = 55000M },
    new Employee() { EIN = 2, FirstName = "Jane", LastName = "Doe", Salary = 55000M },
    new Employee() { EIN = 3, FirstName = "Don", LastName = "DeLuth", Salary = 55000M },
};
The IReadOnlyCollection<T> interface is the most basic read-only collection interface and provides a Count property on top of its inherent IEnumerable<T> members. For example, you could store a read-only view of employees for a directory listing and easily retrieve the number of people.
IReadOnlyCollection<Employee> directory = employees;
int numStaff = directory.Count;
The IReadOnlyList<T> interface is the same as IReadOnlyCollection<T> with the addition of an item indexer; it would be well-suited for a read-only grid display of the needed items.
IReadOnlyList<Person> staff = employees;
Person firstHire = staff[0];
The IReadOnlyDictionary interface, as its name suggests, provides a read-only view of the Dictionary class. The accessible Dictionary class members include the Keys, Values and key indexer properties, in addition to the ContainsKey and TryGetValue methods.
Dictionary<int, Employee> einLookUp = employees.ToDictionary(x => x.EIN);
IReadOnlyDictionary<int, Employee> readOnlyStaff = einLookUp;
var eins = readOnlyStaff.Keys;
var allEmployees = readOnlyStaff.Values;
var secondStaff = readOnlyStaff[2];
bool haveThirdEin = readOnlyStaff.ContainsKey(3);
Employee test;
bool fourthExists = readOnlyStaff.TryGetValue(4, out test);
The IReadOnlyDictionary interface could prove useful for validation, as you would not need to modify the items but may want to quickly access them via a key, such as a control identifier.
As you can see, there are many uses for the new read-only collection interfaces. Primarily, they can be used to clean up your application's API to indicate that a method or class should not modify the contents of a collection that it is accessing. One caveat to note is that the interfaces do not provide an immutable copy of the collection but rather a read-only view of the source mutable collection.
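A quick illustration of that caveat, reusing the employees list from above (the added employee is illustrative):
IReadOnlyCollection<Employee> view = employees;
Console.WriteLine(view.Count);  // 3
employees.Add(new Employee { EIN = 4, FirstName = "Ann", LastName = "Smith", Salary = 60000M });
Console.WriteLine(view.Count);  // 4 -- the read-only view reflects the change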
About the Author
Eric Vogel is a Software Developer at Red Cedar Solutions Group in Okemos, MI. He is the president of the Greater Lansing User Group for .NET. Eric enjoys learning about software architecture and craftsmanship and is always looking for ways to create more robust and testable applications. Contact him at eric.vogel@rcsg.net.
Monday, August 6, 2012
Rich JavaScript Applications – the Seven Frameworks (Throne of JS, 2012)
Rich JavaScript Applications – the Seven Frameworks (Throne of JS, 2012):
A week ago was the Throne of JS conference in Toronto, perhaps the most interesting and different conference I’ve been to for a while. Quoting its website:
It’s no longer good enough to build web apps around full page loads and then “progressively enhance” them to behave more dynamically. Building apps which are fast, responsive and modern require you to completely rethink your approach.
The premise was to take the seven top JavaScript frameworks/libraries for single-page and rich JavaScript applications — AngularJS, Backbone, Batman, CanJS, Ember, Meteor, Knockout, Spine — get the creators of all of them in one location, and compare the technologies head to head.*
Disclaimer: I was there to represent Knockout, so obviously I’m not neutral. In this post my focus is on what the creators said about the scope and philosophy of their technologies, and not so much on whether I agree or disagree.
* Yes, I know that’s eight frameworks, not seven. This part was never fully explained…
TL;DR Executive Summary
- For many web developers, it’s now taken for granted that such client-side frameworks are the way to build rich web apps. If you’re not using one, you’re either not building an application, or you’re just missing out.
- There’s lots of consensus among the main frameworks about how to do it (Model-View-* architecture, declarative bindings, etc. — details below), so to some extent you get similar benefits whichever you choose.
- Some major philosophical differences remain, especially the big split between frameworks and libraries. Your choice will deeply influence your architecture.
- The conference itself was stylish and upbeat, with a lot of socialising and conversations across different technology groups. I’d like to see more like this.
Technologies: Agreement and Disagreement
As each SPA technology was presented, some fairly clear patterns of similarity and difference emerged.
Agreement: Progressive enhancement isn’t for building real apps.
All the technologies follow from the view that serious JavaScript applications require proper data models and ability to do client-side rendering, not just server rendering plus some Ajax and jQuery code.
Quote from Jeremy Ashkenas, the Backbone creator: “At this point, saying ‘single-page application’ is like saying ‘horseless carriage’” (i.e., it’s not even a novelty any more).
Agreement: Model-View-Whatever.
All the technologies made use of model-view separation. Some specifically talked about MVC, some about MVVM, and some specifically refused to define the third piece (just saying it’s models, views, and some kind of application thing that makes them work together). The net result in each case was similar.
Agreement: Data binding is good.
All except Backbone and Spine have a built-in notion of declarative data binding in their views (Backbone instead has a “bring your own view technology” design).
Agreement: IE 6 is dead already.
In a panel discussion, most framework creators said their IE support focus was limited to version 7+ (in fact, Ember and AngularJS only go for IE8, and Batman requires an ES5 shim to run on IE older than v9). This is the way of things to come: even jQuery 2 is set to drop support for IE older than v9.
The only stalwarts here appear to be Backbone and Knockout which support IE6+ (I don’t know about Backbone’s internals, but for KO this means transparently working around a lot of crazy edge-case IE6/7 rendering and eventing weirdnesses).
Agreement: Licensing and source control.
Every single one is MIT licensed and hosted on GitHub.
Disagreement: Libraries vs frameworks.
This is the biggest split right now. You could group them as follows:
Libraries | Frameworks |
Backbone (9552) Knockout (2357) Spine (2017) CanJS (321) | Ember (3993) AngularJS (2925) Batman (958) Meteor (4172) — unusual, see later |
Numbers in brackets are a point-in-time snapshot of the number of GitHub watchers, as a crude indicator of relative influence.
What does this mean?
- Libraries slot into your existing architecture and add specific functionality
- Frameworks give you an architecture (file structure, etc.) that you are meant to follow and, if you do, are intended to handle all common requirements
Note that AngularJS is arguably somewhere in between library and framework: it doesn’t require a particular layout of files at development time (library-like), but at runtime it provides an “app lifecycle” that you fit your code into (framework-like). I’m listing it as a framework because that’s the terminology the AngularJS team prefers.
Disagreement: What’s flexible, what’s integrated.
Each technology has different levels of prescriptiveness:
Views | URL routing | Data storage | |
AngularJS | Built-in DOM-based templates (mandatory) | Built-in (optional) | Built-in system (optional) |
Backbone | Choose your own (most used handlebars.js, a string-based template library) | Built-in (optional) | Built-in (overridable) |
Batman | Built-in DOM-based templates (mandatory) | Built-in (mandatory) | Built-in system (mandatory) |
CanJS | Built-in string-based templates (mandatory) | Built in (optional) | Built in (optional) |
Ember | Built-in string-based templates (mandatory) | Built-in (mandatory) | Built-in (overridable) |
Knockout | Built-in DOM-based templates (optional, can do string-based too) | Choose your own (most use sammy.js or history.js) | Choose your own (e.g., knockout.mapping or just $.ajax) |
Meteor | Built-in string-based templates (mandatory) | Built-in (mandatory?) | Built-in (Mongo, mandatory) |
Spine | Choose your own string-based templates | Built-in (optional) | Built-in (optional?) |
As expected, whenever a library leaves a decision open, its creators argue that this is better overall because it guarantees composability with arbitrary 3rd-party libraries. The obvious counter-argument is that integration can be more seamless when built in. Based on my conversations, the audience was split and opinions went in all directions, usually based on how much other technology stack an individual was wedded to.
Quote from Tom Dale of Ember: “We do a lot of magic, but it’s good magic, which means it decomposes into sane primitives.“
Disagreement: String-based vs DOM-based templates
(As shown in the above table.) For string-based templates, almost everyone used Handlebars.js as the template engine, which seems to dominate this space, though CanJS used EJS. Arguments in favour of string-based templates include “it’s faster” (debatable) and “theoretically, the server can render them too” (also debatable, as that’s only true if you can actually run all of your model code on the server, and nobody actually does that in practice).
DOM-based templating means doing control flow (each, if, etc.) purely via bindings in your actual markup, without relying on any external templating library. Arguments include “it’s faster” (debatable) and “the code is easier to read and write, because there’s no weird chasm between markup and templates, and it’s obvious how CSS will interact with it“.
In my view, the strongest argument here came from the AngularJS guys who stated that in the near future, they expect DOM-based templating will be native in browsers, so we’ll best prepare ourselves for the future by adopting it now. AngularJS is from Google, so they are already working on this with Chromium and standards bodies.
Disagreement: Levels of server-agnosticism
Batman and Meteor express explicit demands on the server: Batman is designed for Rails, and Meteor is its own server. Most others have a goal of being indifferent to what’s on your server, but in practice the architecture, conventions, and some tooling in Ember leans towards Rails developers. Ember absolutely works on other server technologies too, though today it takes a little more manual setup.
The technologies — quick overview
Here’s a rundown of the basic details of each technology covered:
Backbone
- Who: Jeremy Ashkenas and DocumentCloud
- What:
- Model-View in JavaScript, MIT licensed
- Most minimal of all the libraries — only one file, 800 lines of code!
- Extremely tightly-scoped functionality — just provides REST-persistable models with simple routing and callbacks so you know when to render views (you supply your own view-rendering mechanism).
- The best-known of them all, with the most production deployments on big-name sites (perhaps easy to adopt because it’s so minimal)
- Why:
- It’s so small, you can read and understand all of the source before you use it.
- No impact on your server architecture or file layout. Can work in a small section of your page — doesn’t need to control whole page.
- Jeremy seems to exist in a kind of zen state of calm, reasonable opinions about everything. He was like the grown up, supervising all the arguing kids.
- Where: GitHub and own site
- When: In production for nearly 2 years now
Meteor
- Who: The Meteor development group, who just raised $11.2 Million so they can do this full-time
- What:
- Crazy amazing framework from the future, barely reminiscent of anything you’ve ever seen (except perhaps Derby)
- Bridges a server-side runtime (on Node+Mongo) with a client-side one, so your code appears to run on both, including the database. WebSockets syncs between all client(s) and server.
- Does “live deployments” every time you edit your code – client-side runtimes are updated on the fly without losing their state
- Makes more sense if you watch the video
- Like everyone I spoke to at the event, I really want this to succeed — web development needs something this radical to move forwards
- Why: You’ve had enough of conventional web development and now want to live on the bleeding edge.
- Where: GitHub and own site
- When: It’s still early days; I don’t know if there are any production Meteor sites yet except those built by the core team. They’re totally serious about doing this, though.
Ember
- Who: Yehuda Katz (formerly of jQuery and Rails), the Ember team, and Yehuda’s company Tilde
- What:
- Everything you need to build an “ambitious web application”, MIT license
- Biggest framework of them all in both functionality and code size
- Lots of thought has gone into how you can decompose your page into a hierarchy of controls, and how this ties in with a statemachine-powered hierarchical routing system
- Very sophisticated data access library (Ember.Data) currently in development
- Intended to control your whole page at runtime, so not suitable for use in small “islands of richness” on a wider page
- Pretty heavily opinionated about files, URLs, etc., but everything is overridable if you know how
- Design inspired by Rails and Cocoa
- Tooling: They supply project templates for Rails (but you can use other server platforms if you write the code manually)
- Why: Common problems should have common solutions — Ember makes all the common solutions so you only have to think about what’s unique to your own application
- Where: GitHub and own site
- When: Not yet at 1.0, but aiming for it soon. API will solidify then.
AngularJS
- Who: Developed by Google; used internally by them and MIT licensed.
- What:
- Model-View-Whatever in JavaScript, MIT licensed
- DOM-based templating with observability, declarative bindings, and an almost-MVVM code style (they say Model-View-Whatever)
- Basic URL routing and data persistence built in
- Tooling: they ship a Chrome debugger plugin that lets you explore your models while debugging, and a plugin for the Jasmine testing framework.
- Why:
- Conceptually, they say it’s a polyfill between what browsers can do today and what they will do natively in a few years (declarative binding and observability), so we should start coding this way right now
- No impact on your server architecture or file layout. Can work in a small section of your page — doesn’t need to control whole page.
- Where: GitHub and own site
- When: In production now (has been at Google for a while)
Knockout
- Who: The Knockout team and community (currently three on the core team, including me)
- What:
- Model-View-ViewModel (MVVM) in JavaScript, MIT licensed
- Tightly focused on rich UIs: DOM-based templates with declarative bindings, and observable models with automatic dependency detection
- Not opinionated about URL routing or data access — combines with arbitrary third-party libraries (e.g., Sammy.js for routing and plain ajax for storage)
- Big focus on approachability, with extensive documentation and interactive examples
- Why:
- Does one thing well (UI), right back to IE 6
- No impact on your server architecture or file layout. Can work in a small section of your page — doesn’t need to control whole page.
- Where: GitHub and own site
- When: In production for nearly 2 years now
Spine
- Who: Alex MacCaw
- What:
- MVC in JavaScript, MIT license
- Worked example originally written for an O’Reilly book grew into an actual OSS project
- Is a kind of modified clone of Backbone (hence the name)
- Why: You like Backbone, but want a few things to be different.
- Where: GitHub and own site
- When: It’s past v1.0.0 now
Batman
- Who: the team at Shopify (an eCommerce platform company)
- What:
- MVC in JavaScript, almost exclusively for Rails+CoffeeScript developers, MIT licensed
- Most opinionated of them all. You must follow their conventions (e.g., for file layout and URLs) or, as they say in their presentation, “go use another framework“
- Full-stack framework with pretty rich models, views, controllers, and routing. And an observability mechanism, of course.
- DOM-based templating.
- Why: If you use Rails and CoffeeScript, you’ll be right at home
- Where: GitHub and own site
- When: Currently at 0.9. Aiming for 1.0 in coming months.
CanJS
- Who: the team at Bitovi (a JavaScript consulting/training company)
- What:
- MVC in JavaScript, MIT licensed
- REST-persistable models, basic routing, string-based templating
- Not widely known (I hadn’t heard of it before last week), though it is actually a reboot of the older JavaScriptMVC project
- Why: Aims to be the best of all worlds by delivering features similar to the above libraries while also being small
- Where: GitHub and own site
- When: Past 1.0 already
Summary
If you’re trying to make sense of which of these is a good starting point for your project, I’d suggest two question areas to consider:
- Scope. How much do you want a framework or library to do for you? Are you starting from blank and want a complete pre-prepared architecture to guide you from beginning to end? Or do you prefer to pick your own combination of patterns and libraries? Either choice has value and is right for different projects and teams.
- Design aesthetic. Have you actually looked at code and tried building something small with each of your candidates? Do you like doing it? Don’t choose based on descriptions or feature lists alone: they’re relevant but limited. Ignoring your own subjective coding experience would be like picking a novel based on the number of chapters, or a spouse based on their resume/CV.