You are all welcome to check out our new company website at: www.datasite.co.il.
It is currently in Hebrew – we will update to English soon.
All the best,
The feature of SQL Server 2008 that seems to get the most attention from DBAs is the Resource Governor. It does what it says on the tin: for example, you may want to reserve a portion of CPU or another resource for a particular user, process, etc.
At the top level the Resource Governor has resource pools, and there is always a default resource pool.
Below this you create Workload groups:
CREATE WORKLOAD GROUP groupAdhoc
CREATE WORKLOAD GROUP groupReports
CREATE WORKLOAD GROUP groupAdmin
These workload groups will belong to the default resource pool, and for this introduction I will keep it simple by leaving it like that. It is then a matter of assigning whatever you want to those groups by using a function like this:
CREATE FUNCTION dbo.rgclassifier_v1() RETURNS SYSNAME
WITH SCHEMABINDING  -- classifier functions must be schema-bound
AS
BEGIN
    DECLARE @grp_name AS SYSNAME
    SET @grp_name = 'default'
    IF (SUSER_NAME() = 'sa')
        SET @grp_name = 'groupAdmin'
    IF (APP_NAME() LIKE '%MANAGEMENT STUDIO%')
        OR (APP_NAME() LIKE '%QUERY ANALYZER%')
        SET @grp_name = 'groupAdhoc'
    IF (APP_NAME() LIKE '%REPORT SERVER%')
        SET @grp_name = 'groupReports'
    RETURN @grp_name
END
Notice that you can use any rule you like to create an association with a workload group e.g. users or the application.
This function then needs to be registered with the Resource Governor like this:
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION= dbo.rgclassifier_v1)
You are now ready to configure which workload group gets which resources, with this syntax:
ALTER WORKLOAD GROUP groupAdhoc
WITH (REQUEST_MAX_CPU_TIME_SEC = 30)
Note that MAX_CPU_PERCENT is an option of the resource pool rather than of the workload group; to cap CPU at 50% you would set it on the pool, e.g. ALTER RESOURCE POOL [default] WITH (MAX_CPU_PERCENT = 50).
Finally, the changes need to be applied to the Resource Governor configuration running in memory:
ALTER RESOURCE GOVERNOR RECONFIGURE
Changes can be applied to the Resource Governor at will and take effect immediately. New resource limits affect requests that are already running; the classifier function, however, is evaluated at login time, so changing it moves only new sessions into different workload groups.
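Once everything is in place, you can check which workload group each session was classified into. A minimal sketch using the SQL Server 2008 DMVs:

```sql
-- Show each session together with the workload group the classifier put it in
SELECT s.session_id,
       s.login_name,
       s.program_name,
       g.name AS workload_group
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_resource_governor_workload_groups AS g
    ON s.group_id = g.group_id
WHERE s.session_id > 50;  -- skip system sessions
```

Sessions that match none of the classifier rules should show up in the default group.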
SSIS Logging and monitoring
- SQL Server Integration Services includes log providers that you can use to implement logging in packages, containers, and tasks. With logging, you can capture run-time information about a package, helping you audit and troubleshoot a package every time it is run. For example, a log can capture the name of the operator who ran the package and the time the package began and finished.
When you add the log to a package, you choose the log provider and the location of the log. The log provider specifies the format for the log data: for example, a SQL Server database or text file.
You can find the MSDN explanation here:
- Here you can find how to enable logging in the package (I have also explained to Daniel how to do it):
- Integration Services includes logging features that write log entries when run-time events occur and can also write custom messages.
Integration Services supports a diverse set of log providers, and gives you the ability to create custom log providers. The Integration Services log providers can write log entries to text files, SQL Server Profiler, SQL Server, Windows Event Log, or XML files.
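If you pick the SQL Server log provider, the entries land in the dbo.sysdtslog90 table (the default name in SQL Server 2005/2008) in the database the provider's connection points at, so plain T-SQL is enough to inspect a run. A minimal sketch, assuming the default table name:

```sql
-- Most recent error events written by the SQL Server log provider
SELECT TOP 20 source, event, starttime, message
FROM dbo.sysdtslog90
WHERE event IN ('OnError', 'OnTaskFailed')
ORDER BY starttime DESC;
```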
Here you can find how to configure your SSIS package:
- SQL Server Integration Services provides a set of performance counters. Among them, the following few are helpful when you tune or debug your package:
- Buffers in use
- Flat buffers in use
- Private buffers in use
- Buffers spooled
- Rows read
- Rows written
“Buffers in use”, “Flat buffers in use” and “Private buffers in use” are useful for discovering leaks. During package execution you will see these counters fluctuate, but once the package finishes executing, their values should return to what they were before the execution. Otherwise, buffers have leaked. In such cases, please contact Microsoft PSS.
“Buffers spooled” has an initial value of 0. When it goes above 0, it indicates that the engine has started swapping buffers to disk.
“Rows read” and “Rows written” show how many rows the entire Data Flow has processed. They give you an overall idea about the execution progress.
- Here is a link for top 10 SSIS best practices:
- Here is some code you can use to store the logging in an audit table:
CREATE TABLE AuditPackage (
    Id INT IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    PackageName VARCHAR(100) NOT NULL,
    PackageGuid UNIQUEIDENTIFIER NOT NULL,
    ExecutionGuid UNIQUEIDENTIFIER NOT NULL,
    StartTime DATETIME NOT NULL,
    EndTime DATETIME NULL,
    ElapsedTime INT NULL,
    Status VARCHAR(10) NULL)
GO
-- Same signature as the stock logging procedure called by the SQL Server log provider
CREATE PROCEDURE dbo.sp_dts_addlogentry @event sysname, @computer nvarchar(128),
    @operator nvarchar(128), @source nvarchar(1024), @sourceid uniqueidentifier,
    @executionid uniqueidentifier, @starttime datetime, @endtime datetime,
    @datacode int, @databytes image, @message nvarchar(2048)
AS
BEGIN
INSERT INTO sysdtslog90 (event, computer, operator, source, sourceid, executionid, starttime, endtime, datacode, databytes, message)
VALUES (@event, @computer, @operator, @source, @sourceid, @executionid, @starttime, @endtime, @datacode, @databytes, @message);
INSERT INTO AuditPackage (PackageName, PackageGuid, ExecutionGuid, StartTime, ElapsedTime)
SELECT @source, @sourceid, @executionid, GETDATE(), 0
WHERE (@event = 'PackageStart');
UPDATE AuditPackage
SET EndTime = GETDATE(),
    ElapsedTime = DATEDIFF(ms, StartTime, GETDATE()),
    Status = 'Complete'
WHERE (@event = 'PackageEnd'
    AND PackageGuid = @sourceid
    AND ExecutionGuid = @executionid);
UPDATE AuditPackage
SET Status = 'Error'
WHERE (@event = 'OnError'
    AND PackageGuid = @sourceid
    AND ExecutionGuid = @executionid);
END
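Once the AuditPackage table is being populated, finding slow or failed runs is a simple query. A sketch against the AuditPackage table defined above (the threshold is just an example):

```sql
-- Ten slowest package executions, including any that errored
SELECT TOP 10 PackageName, StartTime, EndTime,
       ElapsedTime / 1000.0 AS ElapsedSeconds, Status
FROM AuditPackage
WHERE Status IN ('Complete', 'Error')
ORDER BY ElapsedTime DESC;
```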
SSIS logging gives you the ability to find bottlenecks in your SSIS package. For example, you can:
- See how much time it takes to validate packages
- Trace each step and find out how long it takes
- Log valuable events of the package
SSIS: Custom Logging Using Event Handlers
SQL Server Integration Services (SSIS) contains some really useful logging capabilities, but as with most things in SSIS, it is extensible. There are two methods of extending the logging capability of SSIS:
- Build a custom log provider
- Use event handlers
Look at this link in order to find out which events to capture and how to customize them:
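As a rough sketch of the event-handler approach: add an OnError event handler at the package level containing an Execute SQL Task, map system variables such as System::PackageName, System::ErrorDescription and System::StartTime to its parameters, and have the task run an insert like this (the ErrorLog table and its columns here are hypothetical, just for illustration):

```sql
-- Run from an Execute SQL Task inside an OnError event handler;
-- the ? placeholders are mapped to SSIS system variables
INSERT INTO dbo.ErrorLog (PackageName, ErrorDescription, StartTime, LoggedAt)
VALUES (?, ?, ?, GETDATE());
```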
-- Create the SQL to kill the active database connections
declare @execSql varchar(1000), @databaseName varchar(100)
-- Set the database name for which to kill the connections
set @databaseName = 'your db name'
set @execSql = ''
select @execSql = @execSql + 'kill ' + convert(char(10), spid) + ' '
from master.dbo.sysprocesses
where db_name(dbid) = @databaseName
and dbid <> 0
and spid <> @@spid
exec (@execSql)
Or the easy way is:
alter database dbName set single_user with rollback immediate
and afterwards, when you want to bring it back to normal status:
alter database dbName set multi_user
Here are some production DBA tasks I could think of:
ETL data transfer
OLAP cube – in case there is a need….
Creating logical and physical objects
Establishing multiple DBEnvironment connections
Maintaining buffers for data and log pages
Managing transactions and locks
I’m sure there are more….
It does not really matter what kind of DBA you are, application or infrastructure: the one thing you will always need to do is give support and help to your clients. A client can be a salesman, an analyst, a programmer or your CTO. You need strong support skills if you want to be a professional DBA (among other things, but this one is very important too). Your salesman will come and ask you for all the reports on the sales made yesterday, broken down by hour and sliced by community; the analyst will want a report on members' usage; and the programmer will need help with T-SQL.

You must know how to help all your clients, even if they all come at once; you will need to know how to tell each of them, politely, when you will be available to do the requested work. If you answer politely and without pressure, they will all say it is nice to work with you and that you look professional in your line of duty. You, on the other hand, will need to do everything you have promised your clients. Even a very professional DBA does not look good if he does not know how to give service to his clients.

For example, if a programmer comes to you and wants your help writing a stored procedure, and you are very busy with other things, give him service by telling him that you have 10 minutes for him now and will be happy to help him more later. I saw the opposite example: a programmer came to a DBA to ask for his help, and the DBA told him he did not have time for him and that he does not do these things. The first thing this programmer did was tell his colleagues not to go to the DBA and ask for help. Not only did this give the DBA a “bad name”, it also damaged the company he works in, and it can even lead to bad performance on the servers, because the programmer will upload a stored procedure to the production servers with no help from the DBA. It's a pity! And wrong!
Want to be a good DBA – you must know how to give service to your clients!
I think this may come in handy…
Integrate your application in super quick time using Tweet-SQL. Tweet-SQL promises to allow fast and easy Twitter Integration through the use of CLR stored procedures on Microsoft SQL Server. A quick summary of Tweet-SQL…
create table #temp (spid int, status nvarchar(max), login nvarchar(max), hostname nvarchar(max), blkby nvarchar(max), dbname nvarchar(max), command nvarchar(max), cputime int, diskio int, lastbatch nvarchar(max), programname nvarchar(max), spid2 int, requestid int)
insert #temp exec sp_who2
select hostname, count(*) as connections
from #temp
where hostname != ' .' and login = 'your login'
group by hostname
order by hostname
drop table #temp
This way you can filter sp_who2.