Thursday, March 23, 2017

How I take Backups

Backups -- the first thing I learned as a DBA, and probably one of the most important items in a DBA's repertoire.

I am currently working in a smallish shop. We are about half a terabyte total.  Previously I worked in a shop where the largest database was 10 Terabytes -- that is a different scenario requiring a different approach.

But for me, currently, I get to have a lot of 'luxuries'. The first one: I run a CHECKDB before I do anything else. In testing I noticed that DBCC CHECKDB finding errors does not, by itself, kill the step in a SQL Agent job.

That section of the SQL Agent Job looks like this (And is step one):


-- Run the integrity check, then force the step to fail on any error
DBCC CHECKDB WITH NO_INFOMSGS, ALL_ERRORMSGS;
IF @@ERROR <> 0
BEGIN
    DECLARE @DB SYSNAME = DB_NAME();
    -- Severity 23 WITH LOG guarantees the Agent step fails and alerting fires
    RAISERROR ('DBCC FAILED FOR %s', 23, 1, @DB) WITH LOG;
END

So if anything besides a happy zero hops out of DBCC CHECKDB, this raises an error, which sends an email/page, etc.

For the next step, I do struggle with space (even in my smallish environment), but I flip-flop the backups. The space needed for (compressed) backups is doubled, but I am covered if a backup fails partway through overwriting the previous one (power loss, CPU fries, llama attack). I essentially use DATEPART(dw, GETDATE()) % 2, which yields a 0 or a 1.

Works like so (Step Two):


DECLARE @EvenOrOdd BIT, @EvenOrOddWeek BIT;
-- Odd/even week of the year...
SELECT @EvenOrOddWeek = DATEPART(ww, GETDATE()) % 2;
-- ...XOR'd with odd/even day of the week gives the 0/1 file suffix
SELECT @EvenOrOdd = DATEPART(dw, GETDATE()) % 2 ^ @EvenOrOddWeek;

DECLARE @DatabaseName SYSNAME = 'DatabaseName';
DECLARE @BackupFile NVARCHAR(250) = N'G:\BACKUPS\' + @DatabaseName + '\' + @DatabaseName
    + 'Backup-' + CAST(@EvenOrOdd AS NVARCHAR(1)) + '.bak';
DECLARE @BackupName NVARCHAR(250) = @DatabaseName + N'-Full Database Backup';

-- INIT overwrites whichever of the two files we landed on
BACKUP DATABASE @DatabaseName TO DISK = @BackupFile
WITH NOFORMAT, INIT, NAME = @BackupName, SKIP, NOREWIND,
NOUNLOAD, COMPRESSION, STATS = 10, CHECKSUM;

-- Find the backup set we just wrote and verify it
DECLARE @backupSetId INT;
SELECT @backupSetId = position
FROM msdb..backupset
WHERE database_name = @DatabaseName
  AND backup_set_id = (SELECT MAX(backup_set_id) FROM msdb..backupset
                       WHERE database_name = @DatabaseName);
IF @backupSetId IS NULL
BEGIN
    RAISERROR(N'Verify failed. Backup information for database ''%s'' not found.',
    16, 1, @DatabaseName);
END
RESTORE VERIFYONLY FROM DISK = @BackupFile WITH FILE = @backupSetId, NOUNLOAD, NOREWIND;

So I always have a backup available (as INIT only ever overwrites the 'older' of the two files). With the default DATEFIRST, Sunday, Tuesday, Thursday, and Saturday get 1s; Monday, Wednesday, and Friday get 0s. The even/odd week flips that, so the alternation stays continuous across week boundaries (otherwise Saturday and the following Sunday would both land on file 1).
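If you want to sanity-check which suffix a given date lands on, here is a throwaway snippet (my own test harness, not part of the job):

DECLARE @d DATETIME = '20170323';  -- any date you want to check
SELECT DATENAME(dw, @d) AS DayName,
       (DATEPART(dw, @d) % 2) ^ (DATEPART(ww, @d) % 2) AS FileSuffix;  -- 0 or 1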

You can achieve the same effect with Ola Hallengren's nice scripts -- regardless of which solution I use, I want to test it. I use Ola on all non-prod environments but maintain this strategy for production. I also thoroughly tested this solution (moved files around, opened the data file and borked it up with a hex editor, etc.). To me this is really simple (which matches the environment currently) and straightforward. I want it that way come time to restore from a llama attack.
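For completeness, the restore side is deliberately boring. A sketch -- path and database name mirror the backup step above; point it at whichever of the two files is current (RESTORE HEADERONLY will confirm each file's backup date):

DECLARE @DatabaseName SYSNAME = 'DatabaseName';
DECLARE @BackupFile NVARCHAR(250) = N'G:\BACKUPS\' + @DatabaseName + '\' + @DatabaseName + 'Backup-1.bak';

RESTORE DATABASE @DatabaseName
FROM DISK = @BackupFile
WITH CHECKSUM, REPLACE, RECOVERY;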

Friday, March 17, 2017

Refresh processes

Usually the DBA is involved in some kind of process that moves data from prod into the test environments. If data in certain tables drives the environment, you may want to 'persist' that data. To do that effectively, I think it is important and helpful to have a way to verify that table A and table B are identical.

Below is how I accomplish that currently:


IF object_id('usp_CompareTables') IS NULL
    EXEC ('create procedure dbo.usp_CompareTables as select 1');
GO

ALTER PROCEDURE usp_CompareTables @table1 VARCHAR(100), @table2 VARCHAR(100)
AS
BEGIN
  -- return 1 if identical 0 if not
  DECLARE @sql NVARCHAR(MAX);
  DECLARE @count INT;
  DECLARE @parmDefinition NVARCHAR(200) = '@countOUT INT OUTPUT';
  set @sql = 'select @countOUT = count(*) from (
  (select * from ' + @table1 + '
  except
  select * from ' + @table2 + ' )
  union all
  (select * from ' + @table2 + '
  except
  select * from ' + @table1 +')) a'
 BEGIN TRY 
  -- try to see if the tables are identical
  exec sp_executesql @sql, @parmDefinition, @countOUT=@count OUTPUT
 END TRY
 BEGIN CATCH
  -- if we errored out then the datatypes don't match or the tables are not 
  -- alike.. Or maybe the DBA is comparing a table to a giraffe
  RETURN 0 
 END CATCH

 IF @count = 0
  -- if we have nothing in the list we are identical
  return 1
 ELSE
  -- if not we got a problem
  return 0
END 
GO
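Usage looks like this (the table names here are made up; a return of 1 means identical, 0 means different or not comparable):

DECLARE @identical INT;
EXEC @identical = usp_CompareTables
     @table1 = 'StageDB..Lookup_Status',              -- hypothetical refreshed table
     @table2 = 'UtilityDB..Lookup_Status_Persisted';  -- hypothetical persisted copy
SELECT @identical AS TablesMatch;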

Past that, you will need something to truncate and load. If there are foreign keys, things get messy -- but I handle that with the following code:

IF EXISTS (SELECT * FROM sys.objects
           WHERE object_id = OBJECT_ID(N'[dbo].[Constraints]') AND type IN (N'U'))
BEGIN
    DROP TABLE Constraints
END

CREATE TABLE Constraints
(
    ID                INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
    PKTABLE_QUALIFIER VARCHAR(50),
    PKTABLE_OWNER     VARCHAR(50),
    PKTABLE_NAME      VARCHAR(50),
    PKCOLUMN_NAME     VARCHAR(50),
    FKTABLE_QUALIFIER VARCHAR(50),
    FKTABLE_OWNER     VARCHAR(50),
    FKTABLE_NAME      VARCHAR(50),
    FKCOLUMN_NAME     VARCHAR(50),
    KEY_SEQ           INT,
    UPDATE_RULE       INT,
    DELETE_RULE       INT,
    FK_NAME           VARCHAR(256),
    PK_NAME           VARCHAR(256),
    DEFERRABILITY     INT
)
GO
declare @database varchar(50)
declare @table varchar(50)
declare @sql   varchar(max)

-- Really could just use a while loop.. but cursors work too.
declare fk_cursor cursor for 
select  SourceDatabase, SourceTable from PersistedItems

OPEN fk_cursor
FETCH NEXT FROM fk_cursor into @database, @table



WHILE (@@FETCH_STATUS = 0)
BEGIN
 select @SQL = 'use ' + @database + '; INSERT INTO [Database].[dbo].[Constraints] exec sp_fkeys ''' + @table + ''';'
 exec (@sql)
 FETCH NEXT FROM fk_cursor into @database, @table
END

close fk_cursor
deallocate fk_cursor
GO

-- Drop Constraints
use Database
GO

declare @database varchar(50)
declare @fktablename varchar(50)
declare @fkname varchar(50)
declare @sql   varchar(max)

declare drop_cursor cursor for 
select  PKTABLE_QUALIFIER, FKTABLE_NAME, FK_NAME from Constraints

OPEN drop_cursor

FETCH NEXT FROM drop_cursor into @database, @fktablename, @fkname

WHILE (@@FETCH_STATUS = 0)
BEGIN
 select @SQL = 'use ' + @database + '; ALTER TABLE ' + QUOTENAME(@fktablename) + ' DROP CONSTRAINT ' + QUOTENAME(@fkname) + ';';
 exec (@sql)
 FETCH NEXT FROM drop_cursor into @database, @fktablename, @fkname
END

close drop_cursor
deallocate drop_cursor
GO
use Database
GO

IF object_id('usp_RestoreTable') IS NULL
    EXEC ('create procedure dbo.usp_RestoreTable as select 1');
GO

ALTER PROCEDURE usp_RestoreTable
@SourceTableName NVARCHAR(100),
@PersistedTableName NVARCHAR(100),
@Database NVARCHAR (100),
@isIdentity BIT
AS
BEGIN

 DECLARE @SQL nvarchar(MAX) = ''
 DECLARE @Columns nvarchar(MAX) = ''
 CREATE TABLE #Columns
 (
  ColID INT NOT NULL IDENTITY(1,1) PRIMARY KEY,
  ColName varchar(100)
 )


 -- Pull the source table's column list in ordinal order (dynamic SQL because the
 -- database name is a parameter)
 SELECT @SQL = 'INSERT INTO #Columns SELECT COLUMN_NAME FROM [' + @Database + '].INFORMATION_SCHEMA.COLUMNS
 WHERE TABLE_NAME = ''' + @SourceTableName + ''' ORDER BY ORDINAL_POSITION'

 EXEC (@SQL)
 SELECT @SQL = ''

 -- Build a bracketed column list, then trim the trailing comma
 SELECT @Columns += '[' + ColName + '], ' FROM #Columns
 SELECT @Columns = LEFT(@Columns, LEN(@Columns) - 1)

 DROP TABLE #Columns

 -- Wipe the refreshed copy, then reload it from the persisted copy
 SELECT @SQL = 'TRUNCATE TABLE ' + @Database + '..' + @SourceTableName
 EXEC (@SQL)
 SELECT @SQL = 'USE ' + @Database + '; '
 IF (1 = @isIdentity) SELECT @SQL = @SQL + 'SET IDENTITY_INSERT ' + @SourceTableName + ' ON; ';
 SELECT @SQL = @SQL + 'INSERT INTO [dbo].[' + @SourceTableName + '] (' + @Columns + ') '
       + 'SELECT ' + @Columns + ' FROM [Database].[dbo].[' + @PersistedTableName + ']; '
 IF (1 = @isIdentity) SELECT @SQL = @SQL + 'SET IDENTITY_INSERT ' + @SourceTableName + ' OFF; ';

 exec (@SQL)

END
GO

--Move the data and verify
USE Database
GO
DECLARE @SourceTableName NVARCHAR(100),
@PersistedTableName NVARCHAR(100),
@Database NVARCHAR (100),
@isIdentity BIT,
@SourceTableFullName NVARCHAR(100),
@Matches BIT

DECLARE tandl_cursor CURSOR FOR
select SourceDatabase, SourceTable, PersistedTable, isIdentity from PersistedItems
OPEN tandl_cursor

FETCH NEXT FROM tandl_cursor into @Database, @SourceTableName, @PersistedTableName, @isIdentity

WHILE (@@FETCH_STATUS = 0)
BEGIN
 exec usp_RestoreTable @SourceTableName,@PersistedTableName,@Database,@isIdentity
 SELECT @SourceTableFullName = @Database + '..' + @SourceTableName
     exec @Matches = usp_CompareTables @table1 = @SourceTableFullName, @table2 = @PersistedTableName
 update PersistedItems 
 SET IsPostPersisted = @Matches 
 where SourceDatabase = @Database and SourceTable = @SourceTableName and PersistedTable = @PersistedTableName 
 FETCH NEXT FROM tandl_cursor into @Database, @SourceTableName, @PersistedTableName, @isIdentity
END

close tandl_cursor
deallocate tandl_cursor
GO

-- Create the FKeys
use Database
GO

declare @database varchar(50)
declare @fktablename varchar(50)
declare @fkname varchar(256)
declare @fkcolumnname varchar(50)
declare @pktablename varchar(50)
declare @pkcolumnname varchar (256)
declare @sql   varchar(max)

declare create_cursor cursor for 
select  PKTABLE_QUALIFIER, FKTABLE_NAME, FK_NAME, FKCOLUMN_NAME, PKTABLE_NAME, PKCOLUMN_NAME from Constraints

OPEN create_cursor

FETCH NEXT FROM create_cursor into @database, @fktablename, @fkname, @fkcolumnname, @pktablename, @pkcolumnname


WHILE (@@FETCH_STATUS = 0)
BEGIN
 select @SQL = 'USE ' + @database + '; '
    + 'ALTER TABLE ' + QUOTENAME(@fktablename)
    + ' ADD CONSTRAINT ' + QUOTENAME(@fkname)
    + ' FOREIGN KEY (' + QUOTENAME(@fkcolumnname)
    + ') REFERENCES ' + QUOTENAME(@pktablename)
    + '(' + QUOTENAME(@pkcolumnname) + ');';
 
 exec (@sql)
 FETCH NEXT FROM create_cursor into @database, @fktablename, @fkname, @fkcolumnname, @pktablename, @pkcolumnname
END

close create_cursor
deallocate create_cursor
GO

This way I can simply add items to a driver table, and have the code handle the rest.
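For reference, the scripts above only assume the driver table has the columns they read and update. A sketch of what it could look like (the exact data types are my guess):

CREATE TABLE dbo.PersistedItems
(
    ID INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
    SourceDatabase SYSNAME NOT NULL,   -- database that holds the table to refresh
    SourceTable SYSNAME NOT NULL,      -- table that gets truncated and reloaded
    PersistedTable SYSNAME NOT NULL,   -- saved copy living in the utility database
    isIdentity BIT NOT NULL DEFAULT 0, -- does the source table have an identity column?
    IsPostPersisted BIT NULL           -- set by the verify step after each load
);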

Wednesday, November 2, 2016

The accidental DBA

I recently chatted with a colleague whose shop isn't quite big enough for a full-time DBA, but they definitely needed a little push in the right direction.

He asked about his TLOG (which was rather large) and his indexes (which were rather fragmented).

I pushed him toward the same toolsets I pointed out in previous posts (Ola Hallengren for backups and index maintenance, and sp_Blitz for other odds and ends). I gave him the following as well:

 
DECLARE @DatabaseID int

SET @DatabaseID = DB_ID()

SELECT DB_NAME(@DatabaseID) AS DatabaseName,
       schemas.[name] AS SchemaName,
       objects.[name] AS ObjectName,
       indexes.[name] AS IndexName,
       objects.type_desc AS ObjectType,
       indexes.type_desc AS IndexType,
       dm_db_index_physical_stats.partition_number AS PartitionNumber,
       dm_db_index_physical_stats.page_count AS [PageCount],
       dm_db_index_physical_stats.avg_fragmentation_in_percent AS AvgFragmentationInPercent
FROM sys.dm_db_index_physical_stats (@DatabaseID, NULL, NULL, NULL, 'LIMITED') dm_db_index_physical_stats
INNER JOIN sys.indexes indexes ON dm_db_index_physical_stats.[object_id] = indexes.[object_id] AND dm_db_index_physical_stats.index_id = indexes.index_id
INNER JOIN sys.objects objects ON indexes.[object_id] = objects.[object_id]
INNER JOIN sys.schemas schemas ON objects.[schema_id] = schemas.[schema_id]
WHERE objects.[type] IN('U','V')
AND objects.is_ms_shipped = 0
AND indexes.[type] IN(1,2,3,4)
AND indexes.is_disabled = 0
AND indexes.is_hypothetical = 0
AND dm_db_index_physical_stats.alloc_unit_type_desc = 'IN_ROW_DATA'
AND dm_db_index_physical_stats.index_level = 0
AND dm_db_index_physical_stats.page_count >= 1000
ORDER BY AvgFragmentationInPercent DESC

Which is nothing more than a snippet out of Ola Hallengren's indexing scripts. When he ran it and it showed indexes that were 99% fragmented, I asked him if any shrinking had been going on :) ... and then instructed him never to do that again.

Then I pointed him here and here. The amount you can do with SQL Server out of the box by just setting up a couple of rudimentary jobs is remarkable. And the fact that so much of this is readily available, with so many examples, makes standing up a happy SQL box in a short amount of time very doable.
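For example, scheduling Ola's IndexOptimize nightly takes only a few msdb calls. A sketch -- the job name, schedule, and the database where his scripts were installed are all placeholders:

USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'Nightly - IndexOptimize';
EXEC dbo.sp_add_jobstep
    @job_name = N'Nightly - IndexOptimize',
    @step_name = N'Run IndexOptimize',
    @subsystem = N'TSQL',
    @database_name = N'master',  -- wherever Ola's scripts live
    @command = N'EXECUTE dbo.IndexOptimize @Databases = ''USER_DATABASES'';';
EXEC dbo.sp_add_jobschedule
    @job_name = N'Nightly - IndexOptimize',
    @name = N'Nightly at 2 AM',
    @freq_type = 4,        -- daily
    @freq_interval = 1,    -- every day
    @active_start_time = 020000;
EXEC dbo.sp_add_jobserver @job_name = N'Nightly - IndexOptimize', @server_name = N'(LOCAL)';
GO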

Wednesday, October 12, 2016


Troubleshooting with wait stats

These are my notes on troubleshooting with wait stats:

It's not my favorite approach to correcting issues, as it is more reactive than proactive. But sometimes it's necessary to dig through the wait stats and discern why a query is taking forever.

Things to keep in mind:
  • When SQL Server needs to update data, it will need to pull the clustered index and ALL nonclustered indexes into memory. Even triggers and other items will be brought in.
  • NOT ALL WAITS ARE BAD – they need to be excessive or impactful before you react.

Query statuses:
  • Running
  • Suspended
    ◦ Waiting for something
    ◦ "Yielding" so others can run
  • Runnable
    ◦ Ready to run, but waiting to do so
    ◦ Needs someone else to yield
  • Sleeping
    ◦ No work to do
    ◦ Waiting for instruction from the app

While a query waits, SQL Server records what it is waiting on:
PAGEIOLATCH_xx – Waiting to fetch pages from storage into memory
SOS_SCHEDULER_YIELD – Task has 'voluntarily' yielded to another task so it can complete. Basically the query is waiting for SQL Server to say 'go ahead'
PAGELATCH_xx – Protecting data in memory from inconsistency. The query might be waiting for the public toilet to free up, so to speak
ASYNC_NETWORK_IO – Processing of the query is done. This usually means SQL Server is trying to send the results to the client, and network or client app limitations are restricting it
LCK_xx – If updating data, these waits are created in order to maintain data integrity – isolating transactions
LATCH_xx – Protecting internal memory structures from other transactions
WRITELOG – Flushing the transaction log to storage as transactions commit
ASYNC_IO_COMPLETION – Waiting for asynchronous I/O (such as writing dirty pages out to the data files) to complete
CXPACKET – When a query goes parallel, some of the threads might take longer than others. This is the wait while everyone finishes
Examples of problems:

  • Implicit conversions
    ◦ Issues caused: big index scans (more IO and CPU)
    ◦ Waits you might see: PAGEIOLATCH_xx, SOS_SCHEDULER_YIELD
  • Missing critical indexes
    ◦ Issues caused: big index scans (more IO and CPU)
    ◦ Waits you might see: PAGEIOLATCH_xx, SOS_SCHEDULER_YIELD
  • Cursors (the devil)
    ◦ Issues caused: lots of tiny writes – ants mobbing the tlog
    ◦ Waits you might see: WRITELOG
  • Duplicate indexes
    ◦ Issues caused: clog up your memory and cause extra IO
    ◦ Waits you might see: PAGEIOLATCH_xx, ASYNC_IO_COMPLETION
  • "After" triggers (Satan) causing cascading deletes and updates on foreign keys
    ◦ Issues caused: make transactions take longer; secretly use an escalated isolation level to cascade deletes to child tables
    ◦ Waits you might see: LCK_xx (which can lead to a large backlog of queries, which often leads to CPU and storage waits)
  • App running giant reports
    ◦ Issues caused: more data sent to the app server than it can handle
    ◦ Waits you might see: ASYNC_NETWORK_IO

Refer to the wait stats posts by Paul Randal for an excellent method of collecting these (create a table with a Date column defaulting to GETDATE(), and schedule an insert -- every five minutes or every hour, whatever you prefer).
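Here is a minimal sketch of that idea, snapshotting sys.dm_os_wait_stats into a history table (the table and column names are my own invention):

CREATE TABLE dbo.WaitStatsHistory
(
    CaptureTime DATETIME NOT NULL DEFAULT GETDATE(),  -- when the snapshot ran
    wait_type NVARCHAR(60) NOT NULL,
    waiting_tasks_count BIGINT NOT NULL,
    wait_time_ms BIGINT NOT NULL,
    signal_wait_time_ms BIGINT NOT NULL
);
GO

-- Schedule this in a SQL Agent job (every five minutes, hourly, whatever you prefer):
INSERT dbo.WaitStatsHistory (wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms)
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE waiting_tasks_count > 0;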

Or you can also use the handy-dandy sp_WhoIsActive and a table with a loop around it (Brent Ozar: Who is Active to a Table). I actually do this.
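A minimal sketch of that pattern, assuming Adam Machanic's sp_WhoIsActive is installed (the destination table name is a placeholder):

-- One-time setup: sp_WhoIsActive emits the CREATE TABLE that matches its output
DECLARE @schema VARCHAR(MAX);
EXEC sp_WhoIsActive @get_plans = 1, @return_schema = 1, @schema = @schema OUTPUT;
SET @schema = REPLACE(@schema, '<table_name>', 'dbo.WhoIsActiveLog');
EXEC (@schema);

-- Then schedule this to append a snapshot (pass the same options used above):
EXEC sp_WhoIsActive @get_plans = 1, @destination_table = 'dbo.WhoIsActiveLog';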

Wednesday, September 21, 2016

Filtered Indexes on SQL Server


Filtered indexing can be a good strategy to lower the amount of data you are looking at. Let's say you have a table, like we do, called Members with 50 million records in it. Let's say you know there is an Active flag, and that for 16 million of those records the flag is set to one while the rest are zero. If you have a query that grabs the last and first name AND wants Active = 1, then this type of index might be for you:

CREATE INDEX IX_Members_byLastNameFirstName_filtered ON Members
(
    LastName,
    FirstName,
    Active
)
WHERE (Active = 1)

Now, there are some caveats, and that's what I want to point out in this blog post. Looking at the index, I have included a column (Active) that might not really need to be in there, but the optimizer might want it there... or not... That comes down to testing. And sometimes it can be really frustrating when SQL Server just doesn't use your filtered indexes for whatever reason. There are plenty of posts out there in blog land about these caveats, so feel free to Internet out.
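To illustrate one of those caveats (my own example): a literal predicate can line up with the filter, but a variable usually cannot, because the optimizer has to build a plan that is safe for any value:

-- Can use the filtered index: the literal matches the filter
SELECT LastName, FirstName
FROM Members
WHERE Active = 1 AND LastName = 'Smith';

-- Often cannot: @active is unknown at compile time, so the filtered index may be skipped
DECLARE @active BIT = 1;
SELECT LastName, FirstName
FROM Members
WHERE Active = @active AND LastName = 'Smith'
OPTION (RECOMPILE);  -- one workaround: let the optimizer see the runtime value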

What I want to focus on here are some important settings that need to be in place before running down this road towards the filtered index party:

SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET ARITHABORT ON
SET CONCAT_NULL_YIELDS_NULL ON
SET QUOTED_IDENTIFIER ON
SET NUMERIC_ROUNDABORT OFF
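As a quick illustration of why these matter (a repro sketch against the Members table above, column list assumed): once the filtered index exists, DML from a session with the wrong settings fails outright:

SET ANSI_NULLS OFF;
-- Fails with Msg 1934: INSERT failed because the following SET options
-- have incorrect settings: 'ANSI_NULLS' ...
INSERT INTO Members (LastName, FirstName, Active)
VALUES ('Llama', 'Larry', 1);
SET ANSI_NULLS ON;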

Now it's possible that your sprocs have some settings that don't fall inside these restrictions. I found several that had ANSI_NULLS off at the time, and here is what I cobbled together to get that list:



SELECT
    a.ROUTINE_NAME,
    a.ROUTINE_TYPE,
    OBJECT_DEFINITION(OBJECT_ID(a.ROUTINE_NAME)) AS SprocText,
    b.uses_ansi_nulls
FROM INFORMATION_SCHEMA.ROUTINES AS a
JOIN sys.sql_modules AS b
    ON a.ROUTINE_NAME = OBJECT_NAME(b.[object_id])
WHERE
    OBJECT_DEFINITION(OBJECT_ID(a.ROUTINE_NAME)) LIKE '%Table_With_Filtered_Index%'
    AND b.uses_ansi_nulls = 0


Thursday, September 15, 2016

Entity Framework Woes


I CAN HAS DYNAMIC SQLZ PLZ??

Dynamic SQL is a serious issue for Entity Framework.  Don't believe me?

Look here, here, and here for some non-answers and more frustration. I did dig further and we tried some other things, but nothing was 'easy' and just fell into place.

I even tried SET FMTONLY OFF, to no avail.

What's the real issue here? Entity Framework needs to be able to scan the stored proc to learn the shape of what it returns so it can create a complex object. That's all we are trying to do. If you put the guts of a stored procedure inside dynamic SQL, Entity Framework can't see it, and then the software engineers (some of them anyway) cry. A real solution is to build out the complex type by hand. We all know that many computer programmers are pretty freaking lazy (myself included), so this 'by hand' term is not in our dictionary. (Another solution would be to program a thing that builds out the complex type, but I digress.) Sometimes people let their laziness get the better of them and we get these abominations:


SELECT TOP 100
 m.Id,
 c.CompanyName AS ClientName,
 m.FirstName,
 m.LastName,
 m.MiddleName,
 m.Address1,
 m.Address2,
 m.City,
 m.Suffix,
 m.StateProvince,
 m.PostalCode,
 m.HomePhone,
 m.Email,
 m.Active,
 m.MemberShipNo
FROM Members m
WHERE (@MembershipNumber IS NULL OR (m.MemberShipNo LIKE @MembershipNumber + '%'))
 AND (@FirstName IS NULL OR (FirstName LIKE @FirstName + '%'))
 AND (@LastName IS NULL OR (LastName LIKE @LastName + '%'))
 AND (@ZipCode IS NULL OR (m.PostalCode LIKE @ZipCode + '%'))
 AND (@PhoneNumber IS NULL OR (m.HomePhone LIKE @PhoneNumber + '%'))
 AND (@EmailAddress IS NULL OR (m.Email LIKE @EmailAddress + '%'))

Don't do this because of this.  Every time you do, you murder a kitten and risk being chased by a DBA with a hatchet.

So the following is what we did -- we broke one of the statements out of the dynamic SQL and stuck it at the top of the stored procedure like so:

 SELECT DISTINCT TOP 2500  
 m.[Id],
 m.[ClientId],
 m.[MemberShipNo],
 m.[Active],
 m.[SignedAuth],
 m.[EnrollmentDate],
 m.[SSN_NIN],
 m.[DOB],
 m.[FirstName],
 m.[MiddleName],
 m.[LastName],
 m.[Address1],
 m.[City],
 m.[StateProvince],
 m.[PostalCode],                                                                                     
 m.[Country],
 m.[Void],
 m.[PackageId],
 p.[ProductLabel] AS PackageLabel,
 m.[VendorSelection],
 m.[OnlineCredit],
 m.[MinorProductID],
 r.[Id] AS CaseId
 FROM [dbo].[Members] AS m
 LEFT OUTER JOIN [dbo].[Packages] AS p ON m.[PackageId] = p.[Id]
 OUTER APPLY (SELECT TOP 1 r.[Id] 
     FROM [dbo].[Rq_FraudCase] AS r
     WHERE r.[MembersId] = m.[Id] 
     ORDER BY r.[Id] DESC) AS r
 WHERE m.[Void] = 0 AND 1=2
-- The dynamic SQL that actually returns the rows follows this statement

This almost got it, but now I am returning two result sets to Entity Framework and it poops its pants. So I thought: why not wrap the whole thing in an IF 1=2 at the beginning?

IF 1=2
  BEGIN
 SELECT DISTINCT TOP 2500  
 m.[Id],
 m.[ClientId],
 m.[MemberShipNo],
 m.[Active],
 m.[SignedAuth],
 m.[EnrollmentDate],
 m.[SSN_NIN],
 m.[DOB],
 m.[FirstName],
 m.[MiddleName],
 m.[LastName],
 m.[Address1],
 m.[City],
 m.[StateProvince],
 m.[PostalCode],                                                                                     
 m.[Country],
 m.[Void],
 m.[PackageId],
 p.[ProductLabel] AS PackageLabel,
 m.[VendorSelection],
 m.[OnlineCredit],
 m.[MinorProductID],
 r.[Id] AS CaseId
 FROM [dbo].[Members] AS m
 LEFT OUTER JOIN [dbo].[Packages] AS p ON m.[PackageId] = p.[Id]
 OUTER APPLY (SELECT TOP 1 r.[Id] 
     FROM [dbo].[Rq_FraudCase] AS r
     WHERE r.[MembersId] = m.[Id] 
     ORDER BY r.[Id] DESC) AS r
 WHERE m.[Void] = 0
  END
-- The dynamic SQL follows this block

And Entity Framework is now happy, and the dynamic SQL runs without issue. If someone comes along and tells me this borks the execution plan, or that something else horrible happens, I will investigate and change my tune. For now, this is a good way to let us run a stored proc built on dynamic SQL without tedious amounts of work to get there.

Tuesday, September 13, 2016

Interviewing Candidates


*Scowling face*
So, why do you think we should hire you?

We have recently gone through a string of interviews trying to find the perfect candidates for our company's vacant positions. When I go through this exercise, I like to pay as much attention as possible to what's going on and absorb as much as I can. Interviewing is not a magic ability endowed only to a select few; it is a skill which takes practice to cultivate.

When I interview -- if there is any inkling that the candidate will be programming, I typically start off with something like the FizzBuzz problem. What I typically see: someone either nails FizzBuzz, almost gets it, or can't tie their shoes. If they nail it or almost get it, I move on to some material associated with their field -- whether it's QA or SE, etc. I don't tend to spend too much time in their technical area, as a lot of that can be branded into their brains the night before. After that I have a short sheet of SQL queries and ask the candidate to tell me whether each is good or bad and why (a made-up sample of the flavor appears after the list below). None of the queries are good, and they all send DBAs screaming into the forest at night. Finally I enter into questions like the following:

  • Explain a mistake you recently made and what you did to correct it.  Also what will you do to keep from making that mistake in the future?
  • What makes a good (x)?
  • Tell me about a time when you realized something was critically wrong and what you did after that?
  • What new technologies are you excited about with regard to (their profession)?
  • What is the most innovative thing you have implemented?
  • What would you attribute your success to?
I could go on. What I am generally trying to do here is get a sense of whether they can demonstrate a capacity to learn and adapt. Past that, they need to demonstrate the ability to work with others and be part of a team. Egos are rampant in IT for some reason. Somebody thinks they can code an app and suddenly they are a god to be worshiped. It's wonderful that you can write code, but my five-year-old nephew can do that. Working with others can be one of the more difficult aspects of working in IT, and it can eclipse even the most difficult of technical problems. I attribute my success to the team of individuals I work with -- without them I cannot achieve my goals and be successful.
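For the curious, the queries on that sheet are all in this spirit (this one is a made-up example, not off the actual sheet):

-- Wrapping the indexed column in a function means no index seek on LastName,
-- SELECT * drags every column across the wire, and ORDER BY NEWID()
-- forces a scan plus a full sort of the result on every single execution
SELECT *
FROM Members
WHERE UPPER(LastName) = 'SMITH'
ORDER BY NEWID();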

And since I wouldn't ask someone to do something I wouldn't do myself:

create procedure FizzBuzz
as
begin
    declare @i int = 1, @string varchar(20)
    while (@i <= 100)
    begin
        select @string = ''
        if (@i % 3 = 0)
        begin
            select @string = @string + 'Fizz'
        end
        if (@i % 5 = 0)
        begin
            select @string = @string + 'Buzz'
        end
        if (@string = '')
        begin
            select @string = @i
        end

        select @string
        select @i = @i + 1
    end
end