Wednesday, July 25, 2012

20 Best Tricks to Extend Laptop Battery Life


Mobile technology has become increasingly sophisticated, with new components, better chips, and faster processors. But one part of a laptop or netbook remains its Achilles' heel: the battery. Modern operating systems, graphics, and other demanding applications drain your laptop battery every day, and average battery life still sits in the range of 3 to 4 hours. That remains a real problem for workers who often travel long distances. The most effective fix is to carry spare batteries on your trip, but beyond that there are at least a few tricks that can help your laptop battery last longer.
  1. Defragment the Hard Drive – Regular defragmentation organizes your data more efficiently and makes the hard drive (HD) a little faster at accessing it. The quicker the HD works, the more battery energy you spare.
  2. Turn Off Power-Hungry Applications – Shut down background processes that are not important. Watch energy use through the Windows Task Manager [Ctrl + Alt + Del]. If you are not using the Internet, it is safe to close taskbar programs that are not essential to your work, such as the antivirus and firewall. You can also run msconfig, open the Startup tab, untick the programs you do not want launched at boot, and reboot once.
  3. Suspend Scheduled Tasks – These can be defragmentation runs or virus scans; make sure you schedule them for times when you are near a power source. If that is not possible, postpone them or alternate between them until you have the time and place to charge the battery.
  4. Unplug External Devices – Devices that use USB drain battery power. After use, do not forget to unplug all external equipment such as a mouse, PC cards, Wi-Fi adapters, external speakers, Bluetooth dongles, and even an iPod or iPhone.
  5. Empty the CD/DVD Drive – Do not leave a CD or DVD in the laptop when it is not being used.
  6. Use Local Hardware – Try not to use an external DVD drive or other external equipment.
  7. Dim the Screen – The LCD screen is one of the biggest battery drains. Set the brightness to the lowest level at which you can still see comfortably, using the function keys or the Display Settings in Control Panel.
  8. Turn Off Sound – Mute the speakers and avoid multimedia software. Sound schemes also drain the laptop battery.
  9. Do not Use a Screensaver
  10. Visit the Power Options – Get to know the power-management settings in the "Power Options" applet in Control Panel. Both XP and Vista can turn off components such as the monitor and/or HD after specified intervals. Choosing the "Max Battery" power scheme maximizes battery life.
  11. Use the Classic GUI – On Vista, go to Desktop – Preferences – Color View – Appearance and choose the classic, basic Windows graphical interface. On XP, use Display Properties – Theme – Windows Classic. Linux and Mac OS handle this better and are easier on the battery.
  12. Hibernate Is Better than Sleep – In Stand By (Sleep) mode, the display and HD are switched off, but memory stays powered and the CPU merely slows down, so the battery still drains. Hibernate is better: the laptop saves its entire state to disk and then powers off completely, saving energy.
  13. Don't Run Many Programs Simultaneously – Working with many programs naturally requires more energy. Use only the one or two programs you really need at any given time.
  14. Use More RAM – A larger amount of RAM reduces reliance on virtual memory, which in turn reduces HD activity and the battery drain it causes.
  15. Keep Your Laptop Clean – A laptop with dirty air vents runs hotter, and that drains the battery because the fan works constantly. Clean the air vents regularly to keep the temperature at a minimum, and leave the area around the vents as open as possible for easy air circulation.
  16. Avoid High Temperatures – As much as possible, avoid direct sunlight and other hot places.
  17. Avoid the Memory Effect – Li-Ion batteries do not have this problem, but it can develop in Ni-MH batteries. It can be prevented by recharging only when the battery is completely empty.
  18. Update Software and Drivers – The latest drivers and software are usually designed to work more efficiently than previous versions.
  19. Use the Right Adapter – Make sure the adapter you use matches the laptop's original specification. The wrong type can cause an overload and damage both the battery and the laptop.
  20. Prepare the Laptop for Storage – If you won't use the laptop for some time, make sure the battery is charged to about 40%, remove it, and store it in a cool, dark place.

Sunday, July 22, 2012

Check all lists for specified column type


A simple PowerShell script that shows all column names of a given column type, along with their location and list name.

Recently we were carrying out an internal review of our SharePoint 2010 intranet system at CPS and needed to see where the BCS applications were used in the site collection on all sites. Rather than clicking on the list or library settings for each list / library to check for the External Data type on the column settings, I wrote a simple PowerShell script to check this.

The script is available from the Microsoft Script Center below:
To run the script, update the $siteCollection variable with the correct site collection URL and specify the column type, in our case this was External Data.
Once executed, the script will output the data to a text file called sitelists.txt. An example output can be seen below:
[Image: example output in sitelists.txt]

Friday, July 13, 2012

AJAX ModalPopup Demonstration



ASP.NET AJAX is a free framework for building a new generation of richer, more interactive, highly personalized cross-browser web applications. This new web development technology from Microsoft integrates cross-browser client script libraries with the ASP.NET 2.0 server-based development framework. In addition, ASP.NET AJAX offers you the same type of development platform for client-based web pages that ASP.NET offers for server-based pages. And because ASP.NET AJAX is an extension of ASP.NET, it is fully integrated with server-based services. ASP.NET AJAX makes it possible to easily take advantage of AJAX techniques on the web and enables you to create ASP.NET pages with a rich, responsive UI and server communication. However, AJAX isn't just for ASP.NET. You can take advantage of the rich client framework to easily build client-centric web applications that integrate with any backend data provider and run on most modern browsers.




This ModalPopup will be spawned programmatically. The ModalPopupExtender that this popup is attached to has a hidden TargetControl. The popup can be shown from the server in code-behind, or on the client in script, by calling the ModalPopupExtender's show and hide methods.



The ModalPopup supports 4 animation events that allow you to spice up its showing and hiding with visual effects.
 ModalPopup Description
The ModalPopup extender allows a page to display content to the user in a "modal" manner which prevents the user from interacting with the rest of the page. The modal content can be any hierarchy of controls and is displayed above a background that can have a custom style applied to it.

When displayed, only the modal content can be interacted with; clicking on the rest of the page does nothing. When the user is done interacting with the modal content, a click of an OK/Cancel control dismisses the modal content and optionally runs custom script. The custom script will typically be used to apply whatever changes were made while the modal mode was active. If a postback is required, simply allow the OK/Cancel control to postback and the page to re-render. You can also absolutely position a modal popup by setting the X and Y properties. By default it is centered on the page, however if just X or Y is specified then it is centered vertically or horizontally.
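The default positioning rule described above can be sketched in a few lines of JavaScript. This is only an illustration of the behavior, not toolkit code; `popupPosition` is a hypothetical helper, and the extender itself does this for you.

```javascript
// Sketch of the default positioning rule: center on each axis
// unless an explicit X or Y coordinate was supplied.
function popupPosition(viewport, popup, x, y) {
    return {
        x: (x !== undefined) ? x : Math.floor((viewport.width - popup.width) / 2),
        y: (y !== undefined) ? y : Math.floor((viewport.height - popup.height) / 2)
    };
}
```

For example, with an 800x600 viewport and a 200x100 popup, specifying only X=10 pins the popup horizontally while it stays centered vertically.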

You can provide OnShowing/OnShown/OnHiding/OnHidden animations which are played when the modal content is shown and hidden. For example, you can use these animations to fade-in and fade-out modal content.
 ModalPopup Properties
The control above is initialized with this code. The display on the modal popup element is set to none to avoid a flicker on render. The italic properties are optional:
<ajaxToolkit:ModalPopupExtender ID="MPE" runat="server"
    TargetControlID="LinkButton1"
    PopupControlID="Panel1"
    BackgroundCssClass="modalBackground" 
    DropShadow="true" 
    OkControlID="OkButton" 
    OnOkScript="onOk()"
    CancelControlID="CancelButton" 
    PopupDragHandleControlID="Panel3" >
        <Animations>
            <OnShowing> ..  </OnShowing>
            <OnShown>   ..  </OnShown>    
            <OnHiding>  ..  </OnHiding>            
            <OnHidden>  ..  </OnHidden>            
        </Animations>
    </ajaxToolkit:ModalPopupExtender>
    
  • TargetControlID - The ID of the element that activates the modal popup
  • PopupControlID - The ID of the element to display as a modal popup
  • BackgroundCssClass - The CSS class to apply to the background when the modal popup is displayed
  • DropShadow - True to automatically add a drop-shadow to the modal popup
  • OkControlID - The ID of the element that dismisses the modal popup
  • OnOkScript - Script to run when the modal popup is dismissed with the OkControlID
  • CancelControlID - The ID of the element that cancels the modal popup
  • OnCancelScript - Script to run when the modal popup is dismissed with the CancelControlID
  • PopupDragHandleControlID - The ID of the embedded element that contains the popup header/title which will be used as a drag handle
  • X - The X coordinate of the top/left corner of the modal popup (the popup will be centered horizontally if not specified)
  • Y - The Y coordinate of the top/left corner of the modal popup (the popup will be centered vertically if not specified)
  • RepositionMode - The setting that determines if the popup needs to be repositioned when the window is resized or scrolled.

Thursday, July 12, 2012

Using jQuery with SharePoint 2010

jQuery is an open source JavaScript library that helps you build rich, dynamic, client-side applications. The power in jQuery comes from its simplicity and powerful query syntax. One of jQuery's most powerful abilities is to quickly select various HTML DOM elements. Once you find the element or collection of elements, jQuery makes it easy to modify attributes and CSS for those elements. jQuery also supports extensibility through a rich plug-in model. In fact, a huge community of jQuery plug-ins is available. It is actually a core design point of jQuery to keep the core library small and provide most of the rich functionality via plug-ins. Although it is not possible to cover all aspects of jQuery in this chapter, there is one very important jQuery API with which SharePoint developers and designers should become familiar: the Ajax library. You learned about calling SharePoint from the client using the Client Object Model earlier in this chapter, but the Client Object Model doesn't cover all SharePoint functionality. For example, Search is not covered by the Client Object Model, and neither are many other areas. The Client Object Model covers only APIs in the Microsoft.SharePoint.dll. This is where the jQuery Ajax library comes into play. Fortunately, SharePoint covers almost all its functionality with SOAP-based .asmx web services. The Ajax library makes it relatively easy to call these web services using jQuery from the client.
In this section, you will see how to call SharePoint web services using jQuery and dynamically display the results in a Content Editor Web Part (CEWP), without writing any server code.

Loading jQuery

You can download the jQuery library from the jQuery homepage. The current version as of this writing is 1.4.2. The jQuery library is a single file called jquery-1.4.2.js. There are actually two versions of this file.
  • jquery-1.4.2.js — A human-readable source version.
  • jquery-1.4.2.min.js — A minified and condensed version
I recommend using the source version for development and the minified version in production. Download the jquery-1.4.2.js file and put it somewhere on your SharePoint site. Create a Scripts folder under the SiteAssets library to hold your JavaScript files. The path would be something similar to http://intranet.contoso.com/SiteAssets/Scripts/jquery-1.4.2.js.
To add the jQuery library, use the following script tag on your page.
<script src="/SiteAssets/Scripts/jquery-1.4.2.js" type="text/javascript"></script>
Another option is to use the jQuery library hosted on Microsoft's content delivery network (CDN). The CDN geographically distributes the file around the world, making it faster for clients to download the file. With SharePoint on-premise installations, such as your intranet, this is not as important, but with SharePoint Online or SharePoint-based Internet sites, this will increase the perceived performance of your site. Add the following script tag to your page to use the Microsoft CDN to load the jQuery library.
<script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.min.js" type="text/javascript"></script>
Ajax script loader
One thing that you need to be concerned with when using jQuery is that the jQuery library is loaded only once. There are a number of ways that you could do this, but this section mentions three ways and the various caveats associated with each method.
The first method is to just include the script tags, like you saw previously, directly on the page or, even better, in the master page. You would need to ensure that no other components also add a reference to the jQuery library. Here, the term "components" refers to anything that may inject code when the page renders, such as Web Parts. This is an acceptable approach if you control the entire page, but many times this is not possible due to the modular nature of SharePoint development.
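The "loaded only once" concern can be boiled down to a small guard. This is only a conceptual sketch, with hypothetical names (`loadScriptOnce`, the injected `inject` callback); the real loaders discussed in this section also handle ordering and dependencies.

```javascript
// Minimal "load once" guard: track which script URLs have already been
// requested so a second Web Part asking for jQuery becomes a no-op.
var loadedScripts = {};

function loadScriptOnce(url, inject) {
    // inject is whatever actually adds the <script> tag to the page;
    // it is passed in so the guard itself stays independent of the DOM.
    if (loadedScripts[url]) {
        return false; // already requested, skip
    }
    loadedScripts[url] = true;
    inject(url);
    return true;
}
```

However many components call `loadScriptOnce` for the same URL, the script tag is injected only the first time.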
The next approach is to use the ScriptLink control. The ScriptLink control ensures that the script is loaded only once and will also ensure that other dependencies have been loaded first. Add the following ScriptLink server-side tag to your page to load the jQuery library.
<SharePoint:ScriptLink ID="SPScriptLink"
  runat="server" Defer="false"
  Localizable="false" Name="jquery-1.4.2.js">
</SharePoint:ScriptLink>
The ScriptLink control requires that you put the jQuery library file in the LAYOUTS directory, C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\TEMPLATE\LAYOUTS. This may not be possible if you have limited rights to the server, such as when you are creating sandboxed solutions. Also, even if the JavaScript library is in the LAYOUTS folder, the ScriptLink control is not allowed to run as a sandboxed solution. Therefore, I do not recommend this approach.
The third method, and the one that you should use, is to load jQuery using the Microsoft Ajax script loader, or another client-side script loader. One thing to be aware of is that the Microsoft ASP.NET Ajax library is now included as part of the Ajax Control Toolkit. This means that the ASP.NET Ajax library was split into server controls, which are now in the Ajax Control Toolkit, and client code, which is now done using jQuery. So, most of the functionality that was provided is now done in jQuery or through a jQuery plug-in, except the script loader. The Ajax library script loader has not been released yet for jQuery, so you will need to use the existing Start.js script loader library until it is released.
Download the Start.js library to your Site Assets library’s Script folder that you created earlier to hold your scripts. You can find the current script loader on Microsoft’s CDN at the following URL.
http://ajax.microsoft.com/ajax/beta/0910/Start.js
You should also download the source version for development from the following URL.
Alternatively, you could load the Start.js library directly from the Microsoft CDN.
There are two steps to loading the jQuery library, or any of your custom JavaScript libraries. First, reference the script loader on your page using the following script tag.
<script src="/SiteAssets/Scripts/Start.debug.js" type="text/javascript"></script>
Or, if you are loading the library from the CDN, use the following script tag instead.
<script src="http://ajax.microsoft.com/ajax/beta/0911/Start.js" type="text/javascript"></script>
The second step is to reference the jQuery library or your own libraries using the Sys.loadScripts method, which is part of the Start.js library. The Sys.loadScripts method takes an array of scripts to load and a callback function to call when they have been loaded. Add the following code to load the jQuery library.
<script type="text/javascript">
  Sys.loadScripts(["/SiteAssets/Scripts/jquery-1.4.2.js"], function() {
    alert("jQuery Loaded");
  });
</script>
The Ajax Script Loader prevents the jQuery library from being loaded multiple times on the same page, even if you add many Web Parts that are using this code.

Calling SharePoint web services with jQuery

You have seen how to get SharePoint list data using the Client Object Model, but there are many types of SharePoint data that are not covered by the Client Object Model. The Client Object Model applies only to data in the Microsoft.SharePoint.dll, essentially functionality found in SharePoint Foundation only. To leverage other SharePoint data, such as profile data or search data, you will need to call the SharePoint web services. Calling these web services from the client using JavaScript has become much easier using the jQuery Ajax API. Let’s first take a quick look at how to retrieve list data, in this case the Announcements list, using jQuery. You could do this using the Client Object Model, but this example should serve as a bridge from doing it with the Client Object Model to doing it with jQuery.
jQuery in the Content Editor web part
To keep things simple and demonstrate another technique for using JavaScript on your pages, you will use the Content Editor Web Part (CEWP) to display a list of announcements. This example does not require Visual Studio; everything can be done using only a web browser.

To display a list of announcements by using JavaScript

  1. Start by adding a CEWP to the right column of your home page. You can find the CEWP in the Web Part gallery under the Media and Content category.
  2. Put the Web Part into edit mode by selecting Edit Web Part from the Web Part’s context menu. Click the link in the Web Part titled Click here to add new content.
  3. Next, edit the source HTML for the Web Part. Click the Editing Tools context-sensitive Format Text tab on the ribbon. In the Markup Ribbon group, select Edit HTML source from the HTML drop-down button. In the HTML source dialog, add the following code.
    <!--Load the Script Loader-->
    <script src="/SiteAssets/Scripts/Start.debug.js" type="text/javascript"></script>

    <!-- Load jQuery library-->
    <script type="text/javascript">
      Sys.loadScripts(["/SiteAssets/Scripts/jquery-1.4.2.js"], function() {
        GetAnnouncements();
      });
    </script>
    <script type="text/javascript">
      function GetAnnouncements() {
        // Build the SOAP envelope for the GetListItems call.
        var soapEnv = "<soap:Envelope \
          xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/'> \
          <soap:Body> \
            <GetListItems xmlns='http://schemas.microsoft.com/sharepoint/soap/'> \
              <listName>Announcements</listName> \
              <viewFields> \
                <ViewFields> \
                  <FieldRef Name='Title' /> \
                  <FieldRef Name='Body' /> \
                  <FieldRef Name='Expires' /> \
                </ViewFields> \
              </viewFields> \
            </GetListItems> \
          </soap:Body> \
        </soap:Envelope>";
        jQuery.ajax({
          url: "/_vti_bin/lists.asmx",
          type: "POST",
          dataType: "xml",
          data: soapEnv,
          complete: GetListItemsComplete,
          contentType: "text/xml; charset=\"utf-8\""
        });
      }
      function GetListItemsComplete(xData, status) {
        jQuery(xData.responseXML).find("z\\:row").each(function () {
          $("<li>" + $(this).attr("ows_Title") + "</li>").appendTo("#Announcements");
        });
      }
    </script>
    <ul id="Announcements"></ul>
    
The GetAnnouncements function builds the SOAP message and then uses the jQuery.ajax API to call the lists.asmx web service. jQuery.ajax calls the GetListItemsComplete callback when the web service returns. The GetListItemsComplete method parses the XML data returned from the lists.asmx web service and, as it parses each record, appends a list item to the Announcements list using the appendTo function.
There are two key pieces to calling various SharePoint web services. The first is to understand the exact SOAP XML that is required to call the service, and the second is to understand the returned XML data and how to parse it to extract the exact values required. Although these change between the various services, the code pattern is the same for all services. Unfortunately, discovering how to format the SOAP message can be a challenge. Although MSDN documents the methods, it does not tell you the exact SOAP format or which parameters are optional. One of the easiest ways to discover the syntax is to create a console application in Visual Studio that calls the web service you are interested in calling from JavaScript. Then use the web debugging tool Fiddler to intercept and inspect the web service calls.

Tuesday, July 10, 2012

Insert JavaScript into a Content Editor Web Part (CEWP)


In the 2007 version of SharePoint, we had the Source Editor included in the Content Editor Web Part (CEWP) as our way of inputting JavaScript directly onto a page. The process has changed a little in the new 2010 version. Follow the steps below to perform the same task.
Instead of having a Source Editor to directly paste in our code, we need to first create a simple text file and save our code there. Once saved, upload that file to SharePoint. I am using the Site Assets library for this demo and a JavaScript that displays today’s date. After you have uploaded the file, right click on the file and select Copy Shortcut.
Now, go to your desired page and put the page in edit mode. Add in the Content Editor Web Part located under the Media and Content category. Once added select the web part and use the ribbon UI to navigate to the Web Part Properties screen.
On the web part properties screen, paste in the URL link to your uploaded JavaScript file. Click OK on the properties screen and, if your JavaScript is valid, it should display the desired results! Simple and easy!
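As a concrete example, the "today's date" file mentioned above could look something like the following sketch. The `formatToday` name is hypothetical; the uploaded file can format the date however you like.

```javascript
// Build today's date as readable text, e.g. "July 10, 2012".
function formatToday(now) {
    var months = ["January", "February", "March", "April", "May", "June",
                  "July", "August", "September", "October", "November", "December"];
    return months[now.getMonth()] + " " + now.getDate() + ", " + now.getFullYear();
}

// In the uploaded file you would then write the result into the page, e.g.:
// document.write(formatToday(new Date()));
```

Save this as a .txt or .js file, upload it to the Site Assets library, and point the CEWP at it as described above.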
    

Sunday, July 8, 2012

10 Tips for Writing High-Performance Web Applications



This article uses the following technologies: ASP.NET, .NET Framework, IIS

Original article by Rob Howard

Performance on the Data Tier
When it comes to performance-tuning an application, there is a single litmus test you can use to prioritize work: does the code access the database? If so, how often? Note that the same test could be applied for code that uses Web services or remoting, too, but I'm not covering those in this article.
If you have a database request required in a particular code path and you see other areas such as string manipulations that you want to optimize first, stop and perform your litmus test. Unless you have an egregious performance problem, your time would be better utilized trying to optimize the time spent in and connected to the database, the amount of data returned, and how often you make round-trips to and from the database.
With that general information established, let's look at ten tips that can help your application perform better. I'll begin with the changes that can make the biggest difference.

Tip 1—Return Multiple Resultsets
Review your database code to see if you have request paths that go to the database more than once. Each of those round-trips decreases the number of requests per second your application can serve. By returning multiple resultsets in a single database request, you can cut the total time spent communicating with the database. You'll be making your system more scalable, too, as you'll cut down on the work the database server is doing managing requests.
While you can return multiple resultsets using dynamic SQL, I prefer to use stored procedures. It's arguable whether business logic should reside in a stored procedure, but I think that if logic in a stored procedure can constrain the data returned (reduce the size of the dataset, time spent on the network, and not having to filter the data in the logic tier), it's a good thing.
Using a SqlCommand instance and its ExecuteReader method to populate strongly typed business classes, you can move the resultset pointer forward by calling NextResult. Figure 1 shows a sample conversation populating several ArrayLists with typed classes. Returning only the data you need from the database will additionally decrease memory allocations on your server.

// read the first resultset
reader = command.ExecuteReader();

// read the data from that resultset
while (reader.Read()) {
    suppliers.Add(PopulateSupplierFromIDataReader(reader));
}

// read the next resultset
reader.NextResult();

// read the data from that second resultset
while (reader.Read()) {
    products.Add(PopulateProductFromIDataReader(reader));
}
Tip 2—Paged Data Access
The ASP.NET DataGrid exposes a wonderful capability: data paging support. When paging is enabled in the DataGrid, a fixed number of records is shown at a time. Additionally, paging UI is also shown at the bottom of the DataGrid for navigating through the records. The paging UI allows you to navigate backwards and forwards through displayed data, displaying a fixed number of records at a time.
There's one slight wrinkle. Paging with the DataGrid requires all of the data to be bound to the grid. For example, your data layer will need to return all of the data and then the DataGrid will filter all the displayed records based on the current page. If 100,000 records are returned when you're paging through the DataGrid, 99,975 records would be discarded on each request (assuming a page size of 25). As the number of records grows, the performance of the application will suffer as more and more data must be sent on each request.
One good approach to writing better paging code is to use stored procedures. Figure 2 shows a sample stored procedure that pages through the Orders table in the Northwind database. In a nutshell, all you're doing here is passing in the page index and the page size. The appropriate resultset is calculated and then returned.

CREATE PROCEDURE northwind_OrdersPaged
(
    @PageIndex int,
    @PageSize int
)
AS
BEGIN
    DECLARE @PageLowerBound int
    DECLARE @PageUpperBound int
    DECLARE @RowsToReturn int

    -- First set the rowcount
    SET @RowsToReturn = @PageSize * (@PageIndex + 1)
    SET ROWCOUNT @RowsToReturn

    -- Set the page bounds
    SET @PageLowerBound = @PageSize * @PageIndex
    SET @PageUpperBound = @PageLowerBound + @PageSize + 1

    -- Create a temp table to store the select results
    CREATE TABLE #PageIndex
    (
        IndexId int IDENTITY (1, 1) NOT NULL,
        OrderID int
    )

    -- Insert into the temp table
    INSERT INTO #PageIndex (OrderID)
    SELECT OrderID FROM Orders ORDER BY OrderID DESC

    -- Return total count
    SELECT COUNT(OrderID) FROM Orders

    -- Return paged results
    SELECT O.*
    FROM Orders O, #PageIndex PageIndex
    WHERE O.OrderID = PageIndex.OrderID
      AND PageIndex.IndexID > @PageLowerBound
      AND PageIndex.IndexID < @PageUpperBound
    ORDER BY PageIndex.IndexID
END
In Community Server, we wrote a paging server control to do all the data paging. You'll see that I am using the ideas discussed in Tip 1, returning two resultsets from one stored procedure: the total number of records and the requested data.
The total number of records returned can vary depending on the query being executed. For example, a WHERE clause can be used to constrain the data returned. The total number of records to be returned must be known in order to calculate the total pages to be displayed in the paging UI. For example, if there are 1,000,000 total records and a WHERE clause is used that filters this to 1,000 records, the paging logic needs to be aware of the total number of records to properly render the paging UI.
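The bound arithmetic used by the stored procedure is easy to sanity-check. Restated in JavaScript (with the same zero-based page index as the T-SQL), rows whose IndexId falls strictly between the two bounds are returned, which is exactly one page of rows:

```javascript
// Mirror of the stored procedure's paging math: both bounds are
// exclusive, so exactly pageSize rows satisfy lower < IndexId < upper.
function pageBounds(pageIndex, pageSize) {
    var lower = pageSize * pageIndex;   // SET @PageLowerBound
    var upper = lower + pageSize + 1;   // SET @PageUpperBound
    return { lower: lower, upper: upper };
}
```

For page 3 with a page size of 25, the bounds are 75 and 101, selecting rows 76 through 100 and discarding nothing else on the server's side of the query.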

Tip 3—Connection Pooling
Setting up the TCP connection between your Web application and SQL Server can be an expensive operation. Developers at Microsoft have been able to take advantage of connection pooling for some time now, allowing them to reuse connections to the database. Rather than setting up a new TCP connection on each request, a new connection is set up only when one is not available in the connection pool. When the connection is closed, it is returned to the pool where it remains connected to the database, as opposed to completely tearing down that TCP connection.
Of course you need to watch out for leaking connections. Always close your connections when you're finished with them. I repeat: no matter what anyone says about garbage collection within the Microsoft® .NET Framework, always call Close or Dispose explicitly on your connection when you are finished with it. Do not trust the common language runtime (CLR) to clean up and close your connection for you at a predetermined time. The CLR will eventually destroy the class and force the connection closed, but you have no guarantee when the garbage collection on the object will actually happen.
To use connection pooling optimally, there are a couple of rules to live by. First, open the connection, do the work, and then close the connection. It's okay to open and close the connection multiple times on each request if you have to (optimally you apply Tip 1) rather than keeping the connection open and passing it around through different methods. Second, use the same connection string (and the same thread identity if you're using integrated authentication). If you don't use the same connection string, for example customizing the connection string based on the logged-in user, you won't get the same optimization value provided by connection pooling. And if you use integrated authentication while impersonating a large set of users, your pooling will also be much less effective. The .NET CLR data performance counters can be very useful when attempting to track down any performance issues that are related to connection pooling.
Whenever your application is connecting to a resource, such as a database, running in another process, you should optimize by focusing on the time spent connecting to the resource, the time spent sending or retrieving data, and the number of round-trips. Optimizing any kind of process hop in your application is the first place to start to achieve better performance.
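The pooling behavior described above can be sketched in a language-agnostic way. This is an illustration of the concept only, not how ADO.NET implements it; `createPool` and the `connect` factory are hypothetical names.

```javascript
// Concept sketch of a connection pool: close() returns a connection to
// the pool instead of tearing it down, and open() reuses a pooled one.
function createPool(connect) {
    var idle = [];      // connections waiting to be reused
    var opened = 0;     // how many raw (expensive) connections were created
    return {
        open: function () {
            if (idle.length > 0) return idle.pop(); // reuse, no TCP setup
            opened += 1;
            return connect();                       // expensive setup path
        },
        close: function (conn) { idle.push(conn); }, // return, don't destroy
        openedCount: function () { return opened; }
    };
}
```

Note that reuse only works when consecutive open() calls are compatible, which is the analogue of the "same connection string, same identity" rule above.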
The application tier contains the logic that connects to your data layer and transforms data into meaningful class instances and business processes. For example, in Community Server, this is where you populate a Forums or Threads collection, and apply business rules such as permissions; most importantly it is where the Caching logic is performed.

Tip 4—ASP.NET Cache API
One of the very first things you should do before writing a line of application code is architect the application tier to maximize and exploit the ASP.NET Cache feature.
If your components are running within an ASP.NET application, you simply need to include a reference to System.Web.dll in your application project. When you need access to the Cache, use the HttpRuntime.Cache property (the same object is also accessible through Page.Cache and HttpContext.Cache).
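When you cache computed results keyed on their inputs, the number of possible keys can explode, so any cache should be bounded. Here is a minimal sketch of that idea (`createBoundedCache` is a hypothetical helper, evicting the oldest entry once a size cap is hit; the real ASP.NET Cache uses its own scavenging and expiration policies):

```javascript
// A size-capped cache: once maxEntries keys exist, inserting a new key
// evicts the oldest one, keeping memory use bounded.
function createBoundedCache(maxEntries) {
    var entries = {};
    var keys = [];  // insertion order, oldest first
    return {
        get: function (key) { return entries[key]; },
        set: function (key, value) {
            if (!(key in entries)) {
                if (keys.length >= maxEntries) {
                    delete entries[keys.shift()]; // evict oldest key
                }
                keys.push(key);
            }
            entries[key] = value;
        },
        size: function () { return keys.length; }
    };
}
```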
There are several rules for caching data. First, if data can be used more than once it's a good candidate for caching. Second, if data is general rather than specific to a given request or user, it's a great candidate for the cache. If the data is user- or request-specific, but is long lived, it can still be cached, but may not be used as frequently. Third, an often overlooked rule is that sometimes you can cache too much. Generally on an x86 machine, you want to run a process with no higher than 800MB of private bytes in order to reduce the chance of an out-of-memory error. Therefore, caching should be bounded. In other words, you may be able to reuse a result of a computation, but if that computation takes 10 parameters, you might attempt to cache on 10 permutations, which will likely get you into trouble. One of the most common support calls for ASP.NET is out-of-memory errors caused by overcaching, especially of large datasets.

Common Performance Myths
One of the most common myths is that C# code is faster than Visual Basic code. There is a grain of truth in this, as it is possible to take several performance-hindering actions in Visual Basic that are not possible to accomplish in C#, such as not explicitly declaring types. But if good programming practices are followed, there is no reason why Visual Basic and C# code cannot execute with nearly identical performance. To put it more succinctly, similar code produces similar results.
Another myth is that codebehind is faster than inline, which is absolutely false. It doesn't matter where your code for your ASP.NET application lives, whether in a codebehind file or inline with the ASP.NET page. Sometimes I prefer to use inline code as changes don't incur the same update costs as codebehind. For example, with codebehind you have to update the entire codebehind DLL, which can be a scary proposition.
Myth number three is that components are faster than pages. This was true in Classic ASP when compiled COM servers were much faster than VBScript. With ASP.NET, however, both pages and components are classes. Whether your code is inline in a page, within a codebehind, or in a separate component makes little performance difference. Organizationally, it is better to group functionality logically this way, but again it makes no difference with regard to performance.
The final myth I want to dispel is that every functionality that you want to occur between two apps should be implemented as a Web service. Web services should be used to connect disparate systems or to provide remote access to system functionality or behaviors. They should not be used internally to connect two similar systems. While easy to use, there are much better alternatives. The worst thing you can do is use Web services for communicating between ASP and ASP.NET applications running on the same server, which I've witnessed all too frequently.

Figure 3 ASP.NET Cache 
There are several great features of the Cache that you need to know. The first is that the Cache implements a least-recently-used algorithm, allowing ASP.NET to force a Cache purge—automatically removing unused items from the Cache—if memory is running low. Secondly, the Cache supports expiration dependencies that can force invalidation. These include time, key, and file. Time is often used, but with ASP.NET 2.0 a new and more powerful invalidation type is being introduced: database cache invalidation. This refers to the automatic removal of entries in the cache when data in the database changes. For more information on database cache invalidation, see Dino Esposito's Cutting Edge column in the July 2004 issue of MSDN® Magazine. For a look at the architecture of the cache, see Figure 3.
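A file-based expiration dependency can be sketched as follows; the file path here is a placeholder for illustration. The entry is evicted automatically whenever the backing file changes on disk:

```csharp
using System.IO;
using System.Web;
using System.Web.Caching;

public static class SiteMapCache
{
    // Cache the file's contents; the CacheDependency invalidates the
    // entry as soon as SiteMap.xml is modified, so the next request
    // re-reads the fresh copy.
    public static void CacheSiteMap(HttpContext context)
    {
        string path = context.Server.MapPath("~/App_Data/SiteMap.xml");
        string xml = File.ReadAllText(path);
        context.Cache.Insert("SiteMap", xml, new CacheDependency(path));
    }
}
```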

Tip 5—Per-Request Caching
Earlier in the article, I mentioned that small improvements to frequently traversed code paths can lead to big, overall performance gains. One of my absolute favorites of these is something I've termed per-request caching.
Whereas the Cache API is designed to cache data for a long period or until some condition is met, per-request caching simply means caching the data for the duration of the request. A particular code path is accessed frequently on each request but the data only needs to be fetched, applied, modified, or updated once. This sounds fairly theoretical, so let's consider a concrete example.
In the Forums application of Community Server, each server control used on a page requires personalization data to determine which skin to use, the style sheet to use, as well as other personalization data. Some of this data can be cached for a long period of time, but some data, such as the skin to use for the controls, is fetched once on each request and reused multiple times during the execution of the request.
To accomplish per-request caching, use the ASP.NET HttpContext. An instance of HttpContext is created with every request and is accessible anywhere during that request from the HttpContext.Current property. The HttpContext class has a special Items collection property; objects and data added to this Items collection are cached only for the duration of the request. Just as you can use the Cache to store frequently accessed data, you can use HttpContext.Items to store data that you'll use only on a per-request basis. The logic behind this is simple: data is added to the HttpContext.Items collection when it doesn't exist, and on subsequent lookups the data found in HttpContext.Items is simply returned.
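The pattern described above can be sketched like this; `GetUserSkin` is a hypothetical stand-in for the expensive personalization lookup:

```csharp
using System.Web;

public static class RequestCache
{
    // Per-request caching: the skin is resolved once per request and
    // reused by every control that asks for it during that request.
    public static string GetSkin()
    {
        HttpContext context = HttpContext.Current;
        string skin = context.Items["UserSkin"] as string;
        if (skin == null)
        {
            skin = GetUserSkin(); // expensive lookup, done at most once
            context.Items["UserSkin"] = skin;
        }
        return skin;
    }

    private static string GetUserSkin()
    {
        return "default"; // placeholder for the real personalization call
    }
}
```

Because HttpContext is discarded at the end of the request, there is no expiration or eviction logic to write.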

Tip 6—Background Processing
The path through your code should be as fast as possible, right? There may be times when you find yourself performing expensive tasks on each request or once every n requests. Sending out e-mails or parsing and validating incoming data are just a few examples.
When tearing apart ASP.NET Forums 1.0 and rebuilding what became Community Server, we found that the code path for adding a new post was pretty slow. Each time a post was added, the application first needed to ensure that there were no duplicate posts, then it had to parse the post using a "badword" filter, parse the post for emoticons, tokenize and index the post, add the post to the moderation queue when required, validate attachments, and finally, once posted, send e-mail notifications out to any subscribers. Clearly, that's a lot of work.
It turns out that most of the time was spent in the indexing logic and sending e-mails. Indexing a post was a time-consuming operation, and it turned out that the built-in System.Web.Mail functionality would connect to an SMTP server and send the e-mails serially. As the number of subscribers to a particular post or topic area increased, it would take longer and longer to perform the AddPost function.
Indexing and sending e-mail didn't need to happen on each request. Ideally, we wanted to batch this work together and index 25 posts at a time or send all the e-mails every five minutes. We decided to use the same code I had used to prototype database cache invalidation for what eventually got baked into Visual Studio® 2005.
The Timer class, found in the System.Threading namespace, is a wonderfully useful, but less well-known class in the .NET Framework, at least for Web developers. Once created, the Timer will invoke the specified callback on a thread from the ThreadPool at a configurable interval. This means you can set up code to execute without an incoming request to your ASP.NET application, an ideal situation for background processing. You can do work such as indexing or sending e-mail in this background process too.
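A minimal background-processing sketch using this class might look like the following; the class and method names are illustrative, not from the article. Start would typically be called once from Application_Start in Global.asax:

```csharp
using System;
using System.Threading;

public static class BackgroundWork
{
    // Keep a static reference so the timer isn't garbage collected.
    private static Timer timer;

    // Invoke ProcessQueue on a ThreadPool thread every five minutes,
    // independently of any incoming request.
    public static void Start()
    {
        timer = new Timer(ProcessQueue, null,
            TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(5));
    }

    private static void ProcessQueue(object state)
    {
        // Batch the expensive work here: index queued posts,
        // send pending e-mail notifications, and so on.
    }
}
```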
There are a couple of problems with this technique, though. If your application domain unloads, the timer instance will stop firing its events. In addition, since the CLR has a hard gate on the number of threads per process, you can get into a situation on a heavily loaded server where timers may not have threads to complete on and can be somewhat delayed. ASP.NET tries to minimize the chances of this happening by reserving a certain number of free threads in the process and only using a portion of the total threads for request processing. However, if you have lots of asynchronous work, this can be an issue.
There is not enough room to go into the code here, but you can download a digestible sample at www.rob-howard.net. Just grab the slides and demos from the Blackbelt TechEd 2004 presentation.

Tip 7—Page Output Caching and Proxy Servers
ASP.NET is your presentation layer (or should be); it consists of pages, user controls, server controls (HttpHandlers and HttpModules), and the content that they generate. If you have an ASP.NET page that generates output, whether HTML, XML, images, or any other data, and you run this code on each request and it generates the same output, you have a great candidate for page output caching.
By simply adding this line to the top of your page
<%@ OutputCache Duration="60" VaryByParam="none" %>
you can effectively generate the output for this page once and reuse it multiple times for up to 60 seconds, at which point the page will re-execute and the output will once again be added to the ASP.NET Cache. This behavior can also be accomplished using some lower-level programmatic APIs, too. There are several configurable settings for output caching, such as the VaryByParam attribute just described. VaryByParam just happens to be required, but allows you to specify the HTTP GET or HTTP POST parameters to vary the cache entries. For example, default.aspx?Report=1 or default.aspx?Report=2 could be output-cached by simply setting VaryByParam="Report". Additional parameters can be named by specifying a semicolon-separated list.
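The equivalent behavior expressed through the lower-level Response.Cache API, for example from a page's Page_Load, can be sketched as:

```csharp
// Programmatic equivalent of a 60-second output cache directive.
Response.Cache.SetExpires(DateTime.Now.AddSeconds(60));
Response.Cache.SetCacheability(HttpCacheability.Public);
Response.Cache.SetValidUntilExpires(true);
```

Setting the cacheability to Public is also what allows downstream proxies to cache the response, as discussed next.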
Many people don't realize that when the Output Cache is used, the ASP.NET page also generates a set of HTTP headers that downstream caching servers, such as those used by the Microsoft Internet Security and Acceleration Server or by Akamai, can act on. When HTTP Cache headers are set, the documents can be cached on these network resources, and client requests can be satisfied without having to go back to the origin server.
Using page output caching, then, does not make your application more efficient, but it can potentially reduce the load on your server as downstream caching technology caches documents. Of course, this can only be anonymous content; once it's downstream, you won't see the requests anymore and can't perform authentication to prevent access to it.

Tip 8—Run IIS 6.0 (If Only for Kernel Caching)
If you're not running IIS 6.0 (Windows Server 2003), you're missing out on some great performance enhancements in the Microsoft Web server. In Tip 7, I talked about output caching. In IIS 5.0, a request comes through IIS and then to ASP.NET. When caching is involved, an HttpModule in ASP.NET receives the request, and returns the contents from the Cache.
If you're using IIS 6.0, there is a nice little feature called kernel caching that doesn't require any code changes to ASP.NET. When a request is output-cached by ASP.NET, the IIS kernel cache receives a copy of the cached data. When a request comes from the network driver, a kernel-level driver (no context switch to user mode) receives the request, and if cached, flushes the cached data to the response, and completes execution. This means that when you use kernel-mode caching with IIS and ASP.NET output caching, you'll see unbelievable performance results. At one point during the Visual Studio 2005 development of ASP.NET, I was the program manager responsible for ASP.NET performance. The developers did the magic, but I saw all the reports on a daily basis. The kernel mode caching results were always the most interesting. The common characteristic was network saturation by requests/responses and IIS running at about five percent CPU utilization. It was amazing! There are certainly other reasons for using IIS 6.0, but kernel mode caching is an obvious one.

Tip 9—Use Gzip Compression
While not necessarily a server performance tip (since you might see CPU utilization go up), using gzip compression can decrease the number of bytes sent by your server. This gives the perception of faster pages and also cuts down on bandwidth usage. Depending on the data sent, how well it can be compressed, and whether the client browsers support it (IIS will only send gzip compressed content to clients that support gzip compression, such as Internet Explorer 6.0 and Firefox), your server can serve more requests per second. In fact, just about any time you can decrease the amount of data returned, you will increase requests per second.
The good news is that gzip compression is built into IIS 6.0 and is much better than the gzip compression used in IIS 5.0. Unfortunately, when attempting to turn on gzip compression in IIS 6.0, you may not be able to locate the setting on the properties dialog in IIS. The IIS team built awesome gzip capabilities into the server, but neglected to include an administrative UI for enabling it. To enable gzip compression, you have to spelunk into the innards of the XML configuration settings of IIS 6.0 (which isn't for the faint of heart). By the way, the credit goes to Scott Forsyth of OrcsWeb who helped me figure this out for the www.asp.net servers hosted by OrcsWeb.
Rather than include the procedure in this article, just read the article by Brad Wilson at IIS6 Compression. There's also a Knowledge Base article on enabling compression for ASPX, available at Enable ASPX Compression in IIS. It should be noted, however, that dynamic compression and kernel caching are mutually exclusive on IIS 6.0 due to some implementation details.

Tip 10—Server Control View State
View state is a fancy name for ASP.NET storing some state data in a hidden input field inside the generated page. When the page is posted back to the server, the server can parse, validate, and apply this view state data back to the page's tree of controls. View state is a very powerful capability since it allows state to be persisted with the client and it requires no cookies or server memory to save this state. Many ASP.NET server controls use view state to persist settings made during interactions with elements on the page, for example, saving the current page that is being displayed when paging through data.
There are a number of drawbacks to the use of view state, however. First of all, it increases the total payload of the page both when served and when requested. There is also an additional overhead incurred when serializing or deserializing view state data that is posted back to the server. Lastly, view state increases the memory allocations on the server.
Several server controls, the most well known of which is the DataGrid, tend to make excessive use of view state, even in cases where it is not needed. View state is enabled by default, but if you don't need it, you can turn it off at the control or page level. Within a control, you simply set the EnableViewState property to false, or you can set it globally within the page using this setting:
<%@ Page EnableViewState="false" %>
If you are not doing postbacks in a page or are always regenerating the controls on a page on each request, you should disable view state at the page level.
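At the control level, disabling view state is a one-liner; the control name here is a hypothetical example of a heavyweight DataGrid that is re-bound on every request anyway:

```csharp
// This grid is repopulated on each request, so its view state is
// pure overhead: turn it off.
myDataGrid.EnableViewState = false;
```

The same can be done declaratively with EnableViewState="false" on the control's tag.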

Conclusion
I've offered you some tips that I've found useful for writing high-performance ASP.NET applications. As I mentioned at the beginning of this article, this is more a preliminary guide than the last word on ASP.NET performance. (More information on improving the performance of ASP.NET apps can be found at Improving ASP.NET Performance.) Only through your own experience can you find the best way to solve your unique performance problems. However, during your journey, these tips should provide you with good guidance. In software development, there are very few absolutes; every application is unique.