Archive | Performance

Enable Developer Dashboard

In order to enable the Developer Dashboard you can use the following PowerShell script:


[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")

# Get the Developer Dashboard settings from the content web service
$devboard = [Microsoft.SharePoint.Administration.SPWebService]::ContentService.DeveloperDashboardSettings

$devboard.DisplayLevel = "On"
$devboard.RequiredPermissions = [Microsoft.SharePoint.SPBasePermissions]::FullMask
$devboard.TraceEnabled = $true
$devboard.Update()

You can change the DisplayLevel line depending on whether you want the dashboard to be always on, off, or available on demand. With OnDemand, the user has to click the Developer Dashboard icon in the top right corner of the page to enable the dashboard for that given page.


$devboard.DisplayLevel = "On"

$devboard.DisplayLevel = "Off"

$devboard.DisplayLevel = "OnDemand"

In addition to enabling the Developer Dashboard, you can also control which permission level a user must have in order to use the dashboard. The value used here ensures that it is only available to administrators:


$devboard.RequiredPermissions = [Microsoft.SharePoint.SPBasePermissions]::FullMask
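
If you want a wider group of users to see the dashboard, you can require a less restrictive permission instead. The following is only a minimal sketch and the permission chosen is just an example, not something prescribed here; it makes the dashboard available to anyone who can customize pages (for example site designers):

# Sketch: require only the Add and Customize Pages permission instead of full control.
# Pick whichever SPBasePermissions flag fits your environment.
$devboard.RequiredPermissions = [Microsoft.SharePoint.SPBasePermissions]::AddAndCustomizePages
$devboard.Update()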

So go out, use the developer dashboard and make sure you gain control of your environment 🙂


WFE Caching (BLOB)

Blob Cache (Part I)

In SharePoint Server 2010 you can enable the BLOB cache. The BLOB cache is a disk-based cache that improves browser performance and reduces database load: SharePoint serves the files directly from the web front-end's local hard drive rather than performing an expensive SQL query to retrieve the data from the SQL server.

When a site is rendered for the first time, the files associated with that site that match the configured file extensions are copied to the local hard drive and stored in the cache. All requests for those files after the initial request are then served directly from the hard drive instead of the SQL server.

Configuration

In order to enable the BLOB cache for a web application we need to do the following:

  • Open the web.config file for the web application on which you want to enable BLOB caching
  • Find the line starting with "<BlobCache location="

The line will look something like this:

<BlobCache location="C:\BlobCache" path="\.(gif|jpg|jpeg|jpe|jfif|bmp|dib|tif|tiff|ico|png|wdp|hdp|css|js|asf|avi|flv|m4v|mov|mp3|mp4|mpeg|mpg|rm|rmvb|wma|wmv)$" maxSize="10" enabled="true" />

  • First, ensure that enabled is set to true (enabled="true"). This switches the functionality on.
  • Then define where to store the cache with the location parameter (location="C:\BlobCache"); in this example the cache is stored on the C: partition. It is generally not recommended to keep the BLOB cache on the C: partition where Windows is installed. If possible, put the cache on an SSD or a 15,000 RPM disk.
  • Next, set the maximum size of the BLOB cache in gigabytes with the maxSize parameter (maxSize="10"). The default size is 10 GB.
  • Lastly, you can alter which file extensions are stored in the cache by adding their extensions to the path expression (path="\.(gif|jpg|…)). If you prefer to script this change instead of editing web.config by hand, see the sketch right after this list.
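
As an alternative to editing web.config manually on every web front-end, the same change can be scripted with the SPWebConfigModification API, which propagates the modification to all servers in the farm. This is only a minimal sketch: the web application URL, the D:\BlobCache location, and the Owner string are placeholders rather than values prescribed above.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$webApp = Get-SPWebApplication "http://intranet.contoso.com"   # placeholder URL

# Ensure the enabled, location and maxSize attributes on the existing <BlobCache> element
foreach ($attr in @{ enabled = "true"; location = "D:\BlobCache"; maxSize = "10" }.GetEnumerator())
{
    $mod = New-Object Microsoft.SharePoint.Administration.SPWebConfigModification
    $mod.Path     = "configuration/SharePoint/BlobCache"
    $mod.Name     = $attr.Key
    $mod.Value    = $attr.Value
    $mod.Owner    = "BlobCacheConfig"
    $mod.Sequence = 0
    $mod.Type     = [Microsoft.SharePoint.Administration.SPWebConfigModification+SPWebConfigModificationType]::EnsureAttribute
    $webApp.WebConfigModifications.Add($mod)
}

$webApp.Update()
$webApp.Parent.ApplyWebConfigModifications()   # pushes the modification to every WFE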

Authentication Performance

Authentication performance

If your IIS servers have to service a great number of requests simultaneously and users are getting a slow rendering experience, the cause could be the authentication method you are using.
NTLM works well for small and medium-sized sites; Kerberos is more useful for sites with a higher workload where a large number of requests are processed simultaneously.

With NTLM the authentication result is not cached on the client machine, which means that every request has to be validated against the domain controller each time an object is accessed. This can prove to be a real strain on overall performance if too many requests need to be made (meaning a lot of users are connecting).

Kerberos, on the other hand, caches its tickets on the client, meaning the user only needs to communicate with the domain controller once; all subsequent requests are made using the cached ticket. This can improve the performance of the site dramatically.
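
If you want to verify which scheme a classic-mode web application zone is currently using, a quick check like the following can help. This is just a sketch; the URL is a placeholder.

$webApp = Get-SPWebApplication "http://intranet.contoso.com"   # placeholder URL
$settings = $webApp.GetIisSettingsWithFallback("Default")

# DisableKerberos = $true means the zone uses NTLM only;
# $false means Negotiate (Kerberos) is allowed.
$settings.DisableKerberos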

Bit Rate Throttling

Bit Rate Throttling

In IIS, any media file sent to a client over HTTP is sent using the maximum available bandwidth. This is not necessary, as a user can only watch or listen to the media at a certain pace, so more data is sent than needed. If the user decides not to watch or listen to the full media file, the IIS server has sent out data that could have been saved (thus saving bandwidth).

Bit Rate Throttling enables IIS to measure the bit rate of the media file and determine how much bandwidth is needed for the user to enjoy the file without interruption, while at the same time conserving bandwidth.

Bit Rate Throttling is an extension to IIS 7.0 and can be customized to support bit rate throttling for a number of different formats. It can be downloaded from http://www.iis.net/download/BitRateThrottling.

Blob Storage

When the BLOB cache is also enabled, Bit Rate Throttling uses extension rules for files cached to disk. Files that are served from the BLOB cache by using Bit Rate Throttling are sent to the client based on a percentage of the compressed size, using the encoded bit rate. For example, if the videos in your organization are smaller than 10 MB, you may decide not to use Bit Rate Throttling, because it will affect how fast users can download videos to their local computers. However, if you are serving larger video files, enable Bit Rate Throttling to control the speed at which files are downloaded to client computers.

Max Server Memory

Max Server Memory

This setting controls the amount of memory that can be used by the SQL Server buffer pool (where cached data is stored). It might seem logical to allow the server to use all the memory, since it is only working as a SQL server anyway; it is however important to consider that the operating system and other SQL Server components require memory as well. It is therefore recommended to limit how much of the total available memory the SQL Server buffer pool can use.

NOTE: This setting takes effect immediately; no restart is required.

The following table shows suggested values for the Max Server Memory setting based on the total physical memory available:

Physical RAM    Max Server Memory Setting
4 GB            3.2 GB
6 GB            4.8 GB
8 GB            6.2 GB
16 GB           13 GB
24 GB           20.5 GB
32 GB           28 GB
48 GB           44 GB
64 GB           59 GB

Configuration

In order to set the maximum server memory used for the buffer pool, do the following:

  • Start SQL Server Management Studio and connect to your database instance
  • Right-click the top (server) node and select Properties
  • Select the Memory page and set the Maximum server memory value
  • Press OK to apply the setting (a scripted alternative is shown below)
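
If you prefer to script the change, the same setting can be applied with sp_configure. This is only a sketch: it assumes the SQL Server PowerShell module that provides Invoke-Sqlcmd is loaded, "SQL01" is a placeholder instance name, and the 28 GB value matches the 32 GB row in the table above.

# Set max server memory to 28 GB (28672 MB) on a server with 32 GB of physical RAM
Invoke-Sqlcmd -ServerInstance "SQL01" -Query @"
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;
"@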

Ad-Hoc Workload Optimization

Ad-hoc workload optimization

This is a setting that was added in SQL Server 2008 and is used to control the memory used by ad-hoc queries. It allows the server to store only a small stub of the query plan in the procedure cache the first time a query is run; the full plan is only cached if the query is executed again. This means that one-off ad-hoc queries use less memory. In previous versions of SQL Server there was an issue with large amounts of memory being consumed by plans for queries that were only ever run once (SharePoint was one of the major sources of such queries).

This setting should be turned on for SQL servers that host SharePoint; as far as I am aware there are no negative effects of turning it on.

Configuration

In order to enable the ad-hoc workload optimization, do the following:

  • Start SQL Server Management Studio and connect to your database instance
  • Right-click the top (server) node and select Properties
  • Select the Advanced page and set "Optimize for Ad hoc Workloads" to True
  • Press OK to apply the setting (a scripted alternative is shown below)
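
As with Max Server Memory, this can also be scripted through sp_configure. Again, only a sketch: "SQL01" is a placeholder instance name and Invoke-Sqlcmd assumes the SQL Server PowerShell module is available.

Invoke-Sqlcmd -ServerInstance "SQL01" -Query @"
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
"@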

Multiple TempDBs

The TempDB
The TempDB is one of the most vital databases in your SharePoint farm. Most SharePoint performance issues tend to come down to SQL Server configuration, and a big part of that configuration is the TempDB. Many of the operations your farm performs against a database, such as sorting and building temporary result sets, pass through the TempDB before the results are committed to the actual database.

Location
The TempDB should be placed on its own storage location, on the fastest disks available. This alone can give you a performance increase of 30% or more, as all transactions are suddenly handled by faster hard drives, meaning that changes can be committed faster.

Number of TempDBs in your farm
You can split the TempDB into multiple files. This allows SQL Server to write to multiple files simultaneously, so transactions can complete faster. Each TempDB file should also be placed on storage that is as fast as possible; the recommendation is to keep the TempDB files on separate storage locations. This is not always possible, in which case you should focus on making sure the files are on the fastest storage platform you have available.

As a general recommendation you should take the number of processor cores and divide it by four to get the number of TempDB files you should have.

If we for example have 16 cores, we divide this by 4 and get 16/4 = 4 TempDB files.

Size of the TempDB
Another general recommendation is to set a predetermined size for the TempDB files so that they do not need to grow while large operations are taking place. The size you should set depends on how your environment is used; it is a good idea to watch the TempDB during heavy use of your SharePoint site to determine roughly how large the files need to be.
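
To see how many data files the TempDB currently has and how large they are, you can run a quick query like the one below. This is a sketch: "SQL01" is a placeholder instance name and Invoke-Sqlcmd assumes the SQL Server PowerShell module is available.

# List the TempDB data files with their size in MB (size is stored in 8-KB pages)
Invoke-Sqlcmd -ServerInstance "SQL01" -Query @"
SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files;
"@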

Configuration

The following script will split up the TempDB into four 20GB files. You can modify these values based on the requirements of your farm.

ALTER DATABASE [tempdb]
MODIFY FILE ( NAME = N'tempdev', SIZE = 20480000KB )
ALTER DATABASE [tempdb]
ADD FILE ( NAME = N'tempdev_2', FILENAME = N'G:\SQL\TempDB2\tempDB2\tempdev_2.ndf', SIZE = 20480000KB, MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
ALTER DATABASE [tempdb]
ADD FILE ( NAME = N'tempdev_3', FILENAME = N'G:\SQL\TempDB3\tempDB3\tempdev_3.ndf', SIZE = 20480000KB, MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
ALTER DATABASE [tempdb]
ADD FILE ( NAME = N'tempdev_4', FILENAME = N'G:\SQL\TempDB4\tempDB4\tempdev_4.ndf', SIZE = 20480000KB, MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
GO