
Ashok Blog for SQL Learners and Beginners and Experts

Thursday, 17 May 2012

DBA Role Handbook Topic

Difference Between Database Mail and SQLMail

Database Mail is a new feature introduced in SQL Server 2005 as a replacement for SQLMail from earlier versions of SQL Server, and it has many enhancements over SQLMail. Database Mail is based on SMTP (Simple Mail Transfer Protocol) and is fast and reliable, whereas SQLMail is based on MAPI (Messaging Application Programming Interface). Database Mail depends on Service Broker, so that service must be enabled in msdb for Database Mail to work. Database Mail traffic can also be encrypted for additional security, whereas SQLMail cannot encrypt messages and is therefore less secure. Additionally, for MAPI to be enabled for SQLMail, Outlook must be installed on the server. All of this makes SQLMail a potential security threat to the database server.

If you are still using SQLMail, it is the right time to upgrade to Database Mail.

To use SQLMail you have to enable it with the following commands:
EXEC sp_configure 'SQL Mail XPs', 1
GO
RECONFIGURE
GO
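Because Database Mail depends on Service Broker, it is worth confirming that Service Broker is enabled in msdb before configuring it. A minimal check, using the standard sys.databases catalog view:

-- Confirm Service Broker is enabled in msdb (required by Database Mail)
SELECT name, is_broker_enabled
FROM sys.databases
WHERE name = N'msdb'
GO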

SQL SERVER – 2008 – Configure Database Mail – Send Email From SQL Database

In order to send mail using Database Mail in SQL Server, there are 3 basic steps that need to be carried out.  

1) Create Profile and Account
2) Configure Email
3) Send Email

Step 1) Create Profile and Account:
You need to create a profile and an account using the Configure Database Mail Wizard, which can be accessed from the Configure Database Mail context menu of the Database Mail node under the Management node in Object Explorer. This wizard is used to manage accounts, profiles, and Database Mail global settings.
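If you prefer T-SQL over the wizard, the same profile and account can be created with the Database Mail stored procedures in msdb. A minimal sketch follows; the account name, e-mail address, and SMTP server name are only placeholders and should be replaced with your own values:

USE msdb
GO
-- Create a Database Mail account (account name, address, and server are placeholders)
EXEC sysmail_add_account_sp
    @account_name = 'PinalAccount',
    @email_address = 'test@Example.com',
    @mailserver_name = 'smtp.example.com'
GO
-- Create a profile and add the account to it
EXEC sysmail_add_profile_sp @profile_name = 'PinalProfile'
GO
EXEC sysmail_add_profileaccount_sp
    @profile_name = 'PinalProfile',
    @account_name = 'PinalAccount',
    @sequence_number = 1
GO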

Step 2) Configure Email:
After the Account and the Profile are created successfully, we need to configure the Database Mail. To configure it, we need to enable the Database Mail XPs parameter through the sp_configure stored procedure, as shown here:
sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
sp_configure 'Database Mail XPs', 1
GO
RECONFIGURE
GO
Step 3) Send Email:
After all configurations are done, we are now ready to send an email. To send mail, we need to execute a stored procedure sp_send_dbmail and provide the required parameters as shown below:
USE msdb
GO
EXEC sp_send_dbmail @profile_name='PinalProfile',
@recipients='test@Example.com',
@subject='Test message',
@body='This is the body of the test message.
Congrats, Database Mail was received by you successfully.'
GO

After all validations of the parameters entered are done, certain stored procedures are executed and the mail is queued by Service Broker, read more at SQL SERVER – Introduction to Service Broker.
Database Mail keeps copies of outgoing e-mail messages and displays them in the sysmail_allitems, sysmail_sentitems, sysmail_unsentitems, and sysmail_faileditems views. The status of the mail sent can be seen in the sysmail_mailitems table: when a mail is sent successfully, the sent_status field of sysmail_mailitems is set to 1, and the message can also be seen in sysmail_sentitems. Mails that failed have a sent_status of 2, unsent mails have 0, and mails being retried have 3.
The log can be checked in sysmail_log table as shown below:
SELECT *
FROM sysmail_mailitems
GO
SELECT *
FROM sysmail_log
GO

Status can be verified using sysmail_sentitems table.
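For troubleshooting failed messages, the sysmail_faileditems view can be joined to the Database Mail event log. Both are standard views in msdb; the join below is only a small illustrative sketch:

SELECT f.mailitem_id, f.recipients, f.subject, f.send_request_date, l.description
FROM msdb.dbo.sysmail_faileditems f
LEFT JOIN msdb.dbo.sysmail_event_log l
    ON l.mailitem_id = f.mailitem_id
GO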

How to find missing indexes - SQL Server
As a developer, it is important to have proper indexes on tables for better performance.
But when many people are querying a table, it is not possible to track and analyze all of their queries.
In this case we need metadata about all queries that shows us which queries lack a supporting index and by how much performance could improve if one were created.
This is possible from SQL Server 2005 onwards.
SQL Server has dynamic management views (DMVs) that provide this metadata:

sys.dm_db_missing_index_columns

sys.dm_db_missing_index_details

sys.dm_db_missing_index_groups

sys.dm_db_missing_index_group_stats


First we want to know the statistics of the columns on which an index can benefit us.

sys.dm_db_missing_index_group_stats
It provides the number of user_seeks and user_scans that would have benefited from creating the index. The higher the number of user_seeks and user_scans, the stronger the case for a new index. It also shows how much performance could be improved via avg_user_impact. Similar information is provided in system_seeks, system_scans, and avg_system_impact for scans and seeks performed by the system. We are generally more interested in user_seeks and user_scans.

sys.dm_db_missing_index_groups
It provides the relationship between dm_db_missing_index_group_stats and dm_db_missing_index_details.

sys.dm_db_missing_index_details
It provides index column details for indexes suggested by dm_db_missing_index_group_stats.

sys.dm_db_missing_index_columns
Returns information about the table columns that are missing an index: the column name and how the column would be used.

Column usage
1. Equality -- columns used in equality comparisons, e.g. a.col1 = b.col1
2. Inequality -- columns compared with anything other than equality, e.g. a.col1 > b.col1
3. Included -- columns that are not part of the comparison but are part of the query, e.g. for a covering index

When creating the index, we should put the equality columns first, then the inequality columns, and then the included columns if we want to create a covering index.
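For example, the suggested column list for a single missing index can be inspected with the sys.dm_db_missing_index_columns function. The index_handle value of 1 below is only a placeholder; real handles come from sys.dm_db_missing_index_details:

SELECT column_id, column_name, column_usage
FROM sys.dm_db_missing_index_columns(1)  -- placeholder index_handle
ORDER BY CASE column_usage
             WHEN 'EQUALITY' THEN 1
             WHEN 'INEQUALITY' THEN 2
             ELSE 3
         END
GO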


USE AdventureWorks
-- Sample workload: run the same query for ten territory values
DECLARE @i INT
SET @i = 1
WHILE @i <= 10
BEGIN
    SELECT *
    FROM Sales.Customer
    WHERE CustomerType = 'S'
      AND TerritoryID = @i
    SET @i = @i + 1
END

Now we will look into the missing-index DMVs to see the indexes suggested for this query.

First we need to find the suggested index and the number of user_seeks/user_scans:

SELECT md.object_id,
       md.database_id,
       mgs.unique_compiles,
       mgs.user_seeks,
       mgs.user_scans,
       mgs.last_user_seek,
       mgs.avg_total_user_cost,
       mgs.avg_user_impact,
       md.equality_columns,
       md.inequality_columns,
       md.included_columns,
       md.statement
FROM sys.dm_db_missing_index_group_stats mgs
INNER JOIN sys.dm_db_missing_index_groups mg
    ON mg.index_group_handle = mgs.group_handle
INNER JOIN sys.dm_db_missing_index_details md
    ON mg.index_handle = md.index_handle
WHERE md.database_id = DB_ID()
  AND md.object_id = OBJECT_ID('Sales.Customer')



Here we can see that the number of user_seeks is 10 and avg_user_impact is 88.2. That means that if we create the index, performance can improve by about 88%. Great! Now we look at the equality columns, which are [TerritoryID] and [CustomerType], while the inequality columns are NULL.

So we need to create an index on [TerritoryID], [CustomerType]:

CREATE INDEX idx_customer_TerritoryID_CustomerType ON Sales.Customer ([TerritoryID], [CustomerType])
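If the DMV output had also listed included_columns, those columns would go into an INCLUDE clause to make the index covering. A sketch, with an illustrative included column:

CREATE INDEX idx_customer_TerritoryID_CustomerType_covering
ON Sales.Customer ([TerritoryID], [CustomerType])
INCLUDE ([ModifiedDate])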

What is meant by LDF and MDF?


Microsoft® SQL Server maps a database over a set of operating-system files. Data and log information are never mixed on the same file, and individual files are used only by one database.

SQL Server databases have three types of files:

Primary data files
The primary data file is the starting point of the database and points to the other files in the database. Every database has one primary data file. The recommended file name extension for primary data files is .mdf.
Secondary data files
Secondary data files comprise all of the data files other than the primary data file. Some databases may not have any secondary data files, while others have multiple secondary data files. The recommended file name extension for secondary data files is .ndf.
Log files
Log files hold all of the log information used to recover the database. There must be at least one log file for each database, although there can be more than one. The recommended file name extension for log files is .ldf.

SQL Server does not enforce the .mdf, .ndf, and .ldf file name extensions, but these extensions are recommended to help identify the use of the file.

In SQL Server, the locations of all the files in a database are recorded in both the master database and the primary file for the database. Most of the time the database engine uses the file location information from the master database. For some operations, however, the database engine uses the file location information from the primary file to initialize the file location entries in the master database.

SQL Server files have two names:
logical_file_name is a name used to refer to the file in all Transact-SQL statements.
The logical file name must conform to the rules for SQL Server identifiers and must be unique to the database.
os_file_name is the name of the physical file.
It must follow the rules for Microsoft Windows NT® or Microsoft Windows® Me, and Microsoft Windows 98 file names.
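To see how the logical and physical file names map for a given database, the sys.database_files catalog view can be queried from within that database. A minimal sketch:

SELECT name AS logical_file_name,
       physical_name AS os_file_name,
       type_desc
FROM sys.database_files
GO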

Clustered Index

If you create a primary key on a table, then SQL Server will create an index for this primary key; by default this index will be a clustered index on the primary key column(s).
For a clustered index, the actual data that comprises a database table is always stored on disk in a certain order according to the column (or columns) specified by the index. The data is physically arranged on the disk in this order. A clustered index contains all of the data for a table in the index, sorted by the index key, just like a phone book is sorted by name and contains all of the information for the person inline.
There can only be one index that is clustered for any table.

Advantage – Very efficient to retrieve data from a table.
[It would be faster to select the records in order of the Primary Key than to get them in any other order. This is because each record that needs to be accessed from disk is right next to the previous one, making the disk access more efficient.]

Disadvantage – If we update a record and change the value of an indexed column in a clustered index, the database might need to move the entire row into a new position to keep the rows in sorted order. This behavior essentially turns an update query into a DELETE followed by an INSERT, with an obvious decrease in performance. A table’s clustered index is therefore often placed on the primary key or a foreign key column, because key values generally do not change once a record is inserted into the database.
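A minimal sketch of this behaviour on a hypothetical table: creating the primary key below also creates the table's single clustered index.

CREATE TABLE dbo.DemoCustomers      -- hypothetical table
(
    CustomerId   INT NOT NULL,
    CustomerName NVARCHAR(100) NOT NULL,
    CONSTRAINT PK_DemoCustomers PRIMARY KEY CLUSTERED (CustomerId)
)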

Non Clustered Index

Non-Clustered indexes are all the other indexes on a table that do not dictate the order that records are stored on disk.

Advantage
We can have many non-clustered indexes, as opposed to just one clustered index per table.
Creating the proper index can drastically increase the performance of an application. If we create indexes on the columns that are mostly used in WHERE and GROUP BY clauses, we can improve the performance of the SQL query.

Disadvantage
The disadvantage of a non-clustered index is that it is slightly slower than a clustered index and it can take up quite a bit of space on disk.
Another disadvantage is that using too many indexes can actually slow your database down, because each time a row is inserted, updated, or removed, every index on the table also has to be updated.
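Continuing the hypothetical dbo.DemoCustomers table from the clustered index example above, a non-clustered index on a frequently filtered column might look like this:

CREATE NONCLUSTERED INDEX IX_DemoCustomers_CustomerName
    ON dbo.DemoCustomers (CustomerName)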
 

What are the differences between a clustered and a non-clustered index?

  1. A clustered index is a special type of index that reorders the way records in the table are physically stored. Therefore a table can have only one clustered index. The leaf nodes of a clustered index contain the data pages.
  2. A non-clustered index is a special type of index in which the logical order of the index does not match the physical stored order of the rows on disk. The leaf nodes of a non-clustered index do not consist of the data pages. Instead, the leaf nodes contain index rows.
  •  What is the maximum number of indexes per table? The answer depends on the SQL Server version:
    For SQL Server 2005:
    1 Clustered Index + 249 Nonclustered Indexes = 250 Indexes
    For SQL Server 2008:
    1 Clustered Index + 999 Nonclustered Indexes = 1000 Indexes

    The maximum number of key columns in any single index is 16.
     

  SQL Server Indexed Views

What is an Indexed View?

When you create a view, you add a stored query to the database that can be read in the same manner as a table. The view acts as a virtual table that requires no additional storage space, as the information is read directly from the tables.
A view's performance can be significantly worse than that of a table, particularly when the view reads from several joined tables. To improve performance you can create indexed views. These are views that have a unique, clustered index applied. Indexed views can also use non-unique and non-clustered indexes, but only if a unique, clustered index exists.
When the first index is added to a view, it is built using a b-tree structure similar to that of a table index. The b-tree includes the data from the columns that are present in the index. However, unlike a table index, all of the columns that are included in the view are also added to the data rows of the index. This can lead to large increases in the storage requirements for a database but improves the view's performance.

Limitations of Indexed Views

Indexed views have additional limitations compared to other, non-indexed views. The key restrictions are listed below. There are other limitations that are not described here; you should refer to the Microsoft documentation for further details.
  • Indexed views must be created using the SCHEMABINDING option. This option links the view to the schema elements that it uses, disallowing changes to the underlying tables that would break the view. If the view references user-defined functions, these too must be created using the option.
  • Indexed views may only select columns from base tables. They may not retrieve values from other views. The tables must be in the same database as the view and must have the same owner.
  • The columns in the view must be explicitly listed in the view's query. You cannot use the asterisk character to specify that you wish to retrieve all columns from a table, so queries such as "SELECT * FROM MyTable" are not permitted.
  • Indexed views may include inner joins but outer joins are not permitted.
  • Any functions used in the view's query must be deterministic.

Adding an Indexed View to a Database

In this section we will add an indexed view to the JoBS tutorial database. This database was constructed and populated with data throughout the course of the original tutorial; any database with similar Jobs, StandardJobs, and Contracts tables can be used to follow along.

Adding a View

To create an indexed view, you must first create a standard view with the SCHEMABINDING option. In this case, we will use Transact-SQL to create a sample view in the JoBS database. The view will list all of the jobs that are linked to an annual contract, rather than those that were purchased as a single job; these are the rows linked to a contract that has a renewal date. The view will return the job number, contract number, standard job name, price and the value of the linked contract. This view could be the source of a report that determines the profitability of contracts.
To create the view, run the following T-SQL statement. Note that the table names are expressed in the two-part format, as this is required when creating views with schema binding enabled.
CREATE VIEW ContractJobs WITH SCHEMABINDING AS
SELECT
    J.JobId,
    J.ContractNumber,
    SJ.JobName,
    SJ.StandardPrice,
    C.ContractValue
FROM
    dbo.Jobs J
INNER JOIN
    dbo.StandardJobs SJ
ON
    J.StandardJobId = SJ.StandardJobId
INNER JOIN
    dbo.Contracts C
ON
    J.ContractNumber = C.ContractNumber
WHERE
    C.RenewalDate IS NOT NULL

Creating a Unique Clustered Index

Once the view has been created, indexes may be added to it. The first index that you create must be a unique index and must be clustered. This index is used to store the data in the view and will be updated whenever the data in the underlying tables is changed. In the ContractJobs view, the JobId is the only suitable candidate key for the index. To create the index, execute the following T-SQL:
CREATE UNIQUE CLUSTERED INDEX IX_ContractJobs_JobId ON ContractJobs
(
    JobId
)

Creating Additional Indexes

Once a unique clustered index is present for a view, further non-clustered indexes may be created. For example, to add an index for the ContractNumber column, you could use the following command:
CREATE INDEX IX_ContractJobs_ContractNumber ON ContractJobs
(
    ContractNumber
)
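One practical note: on SQL Server editions that do not automatically match indexed views during query optimization, the view's indexes are only used when the query references the view with the NOEXPAND table hint. A small sketch reusing the view above:

SELECT JobId, ContractNumber, JobName, StandardPrice, ContractValue
FROM ContractJobs WITH (NOEXPAND)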

 

SQL SERVER – Removing Key Lookup – Seek Predicate – Predicate – An Interesting Observation Related to Datatypes

This entire concept may appear very simple, but if you are working in the area of query optimization and server tuning, you will find hints like this one useful.
Before we start, let us understand the difference between Seek Predicate and Predicate. Seek Predicate is the operation that describes the b-tree portion of the Seek. Predicate is the operation that describes the additional filter using non-key columns. Based on the description, it is very clear that Seek Predicate is better than Predicate as it searches indexes whereas in Predicate, the search is on non-key columns – which implies that the search is on the data in page files itself.

Let us see an example where we first remove the bookmark lookup using a covering index. Removing the bookmark lookup resulted in an Index Scan, which is not good for performance. When the Index Scan is converted to an Index Seek, it provides a significant improvement in performance.
Run following SELECT, which is based on the database AdventureWorks.
USE AdventureWorks
GO
-- CTRL + M Enforces Key Lookup
-- Try 1
SELECT NationalIDNumber, HireDate, MaritalStatus
FROM HumanResources.Employee
WHERE NationalIDNumber = 14417807
GO
Let us check the execution plan for this query. The execution plan contains a Key Lookup because the SELECT list retrieves columns that are not part of the index used to satisfy the WHERE clause.
Let us create a covering index on this table HumanResources.Employee.
-- Create Non clustered Index
CREATE NONCLUSTERED INDEX [IX_HumanResources_Employee_Example] ON HumanResources.Employee
(
NationalIDNumber ASC, HireDate, MaritalStatus
) ON [PRIMARY]
GO
After creating above index, let us run the same SELECT statement again.
-- CTRL + M Removes Key Lookup, but it still enforces Index Scan
-- Try 2
SELECT NationalIDNumber, HireDate, MaritalStatus
FROM HumanResources.Employee
WHERE NationalIDNumber = 14417807
GO
Due to the non-clustered index, the Key Lookup is removed along with the Nested Loops, but an Index Scan still remains, and this is not good for performance.
I tried to remove the scan, but my attempts were unsuccessful. I finally looked at the datatype of NationalIDNumber. All this time, I was assuming that this datatype was INT, but on a careful check, I found that the datatype of NationalIDNumber is nvarchar(15).
In the SELECT statement, we were comparing an NVARCHAR column to an INT value, and this was forcing the predicate operation while executing the query.
As discussed earlier, due to the predicate operation there has to be an implicit conversion on the NationalIDNumber side, which forces the query optimizer not to seek the index; instead it has to scan the complete data to get the necessary rows. This is not the desired solution, because the Index Scan reduces performance. The reason for this conversion is that I am using an INT value in the WHERE clause instead of an NVARCHAR string.
I changed my WHERE clause and passed a string as the parameter instead of an INT.
-- CTRL + M Removes Key Lookup and it enforces Index Seek
-- Try 3
SELECT NationalIDNumber, HireDate, MaritalStatus
FROM HumanResources.Employee
WHERE NationalIDNumber = '14417807'
GO
After running the query with the changed WHERE clause, the Index Scan is now converted into Index Seek.
Index Seek is definitely the most optimal solution in this particular scenario.
When the detailed execution plan was checked, I found the following two notable points. First, the Predicate is converted to a Seek Predicate, which is the reason for the performance improvement, as described earlier: instead of scanning data in the table, an Index Seek is performed. Second, as the datatype of NationalIDNumber is NVARCHAR and the parameter is passed as VARCHAR, the conversion happens on the parameter instead of the NationalIDNumber column, and this allows the Index Seek to replace the Index Scan.
If we pass the parameter as NVARCHAR instead of VARCHAR, the execution plan will remain the same, but CONVERT_IMPLICIT will not be required any more. Let us run the following query, which has NVARCHAR as the parameter.
-- CTRL + M Removes Key Lookup and it enforces Index Seek and no CONVERT_IMPLICIT
-- Try 4
SELECT NationalIDNumber, HireDate, MaritalStatus
FROM HumanResources.Employee
WHERE NationalIDNumber = N'14417807'
GO
The execution plan of the above query is very similar to that in which we had passed the parameter as VARCHAR.
Now let us check the execution plan for the same.
In the WHERE condition, the operands on both sides of “=” are NVARCHAR. NationalIDNumber and the parameter passed are both NVARCHAR, which has removed the CONVERT_IMPLICIT. However, there are no changes in the execution plan.
In summary, when Key Lookup is removed and index seek replaces index scan, the performance is tuned up. Let us quickly compare the execution plan of the above four options. I have included the complete code here for easy reference.
USE AdventureWorks
GO
-- CTRL + M Enforces Key Lookup
-- Try 1
SELECT NationalIDNumber, HireDate, MaritalStatus
FROM HumanResources.Employee
WHERE NationalIDNumber = 14417807
GO
-- Create Non clustered Index
CREATE NONCLUSTERED INDEX [IX_HumanResources_Employee_Example] ON HumanResources.Employee
(
NationalIDNumber ASC, HireDate, MaritalStatus
) ON [PRIMARY]
GO
--WAITFOR DELAY '00:00:30'
-- CTRL + M Removes Key Lookup, but it still enforces Index Scan
-- Try 2
SELECT NationalIDNumber, HireDate, MaritalStatus
FROM HumanResources.Employee
WHERE NationalIDNumber = 14417807
GO
-- CTRL + M Removes Key Lookup and it enforces Index Seek
-- Try 3
SELECT NationalIDNumber, HireDate, MaritalStatus
FROM HumanResources.Employee
WHERE NationalIDNumber = '14417807'
GO
-- CTRL + M Removes Key Lookup and it enforces Index Seek and no CONVERT_IMPLICIT
-- Try 4
SELECT NationalIDNumber, HireDate, MaritalStatus
FROM HumanResources.Employee
WHERE NationalIDNumber = N'14417807'
GO
/* What is the reason for difference between Try 2 and Try 3?
Check the execution plan
*/
-- Drop Index
DROP INDEX [IX_HumanResources_Employee_Example] ON HumanResources.Employee WITH ( ONLINE = OFF )
GO

Let us look at the execution plan.
Summary points:
  • In general, Index Seek is better than Index Scan (leaving aside the "it depends" cases)
  • Understand Predicates and Seek Predicates and see if you have only Seek Predicates.
  • In case of key lookup or bookmark lookup, see if you can create a covering index or included column index.
  • Use the datatype wisely even though there is no change in the resultset.

Note: http://blog.sqlauthority.com/2009/11/09/sql-server-removing-key-lookup-seek-predicate-predicate-an-interesting-observation-related-to-datatypes/

 

Understanding SQL Server fixed database roles

Problem
I know there are fixed database roles that come with SQL Server. How do I best use them within my installations? What should I watch out for? In this tip we will cover each of the database roles and recommendations on when to and when not to use them.

Solution
Within each database, SQL Server does have fixed database roles, ones which are standard and included in every database. These differ from the database roles you can create and use yourself in that they have pre-assigned permissions. The fixed database roles are:
  • db_owner
  • db_securityadmin
  • db_accessadmin
  • db_backupoperator
  • db_ddladmin
  • db_datareader
  • db_datawriter
  • db_denydatareader
  • db_denydatawriter
Like with the server roles, let's look at each in turn.

db_owner
We'll start with the biggest role: db_owner. A member of the db_owner role can do anything inside the database. Now there is a difference between a member of the db_owner role and the dbo user. That difference is that if someone maps into the database as the dbo user, that person bypasses all security checks. An example of this is anyone who is a member of the sysadmin fixed server role. They map in as dbo. And as a result, they don't receive security checks.
If a user is not dbo but is a member of the db_owner role, he or she does receive security checks. Of course, unless you've explicitly used DENY to block access, that user can do what he or she wants. The DENY stops them cold (it does not stop dbo). However, a member of the db_owner role could remove the DENY, so effectively that person can do anything, even if you put roadblocks in place. Therefore, the db_owner role should be given out only when necessary. Some applications will require it, which is a headache, but rarely will actual people need it (unless the application is connecting using their credentials). So you should be able to keep reasonably tight control over this role. Like sysadmin, which is reported as a member of every fixed server role if you use the IS_SRVROLEMEMBER() function, a member of the db_owner role will be reported as a member of any other fixed database role you check, even if the user is not explicitly a member of that role. For instance, a user who is a member of db_owner but who is not a member of db_securityadmin will still return a 1 if you execute the following query:
SELECT IS_MEMBER('db_securityadmin');
Things to remember:
  • The db_owner role allows a user to do anything within the database.
  • DBAs who are already members of the sysadmin fixed server role come in as dbo and don't need this role explicitly granted to them.
  • Normal users should not be a member of this role.
  • Applications might require their user account to be a member of this role.

db_securityadmin
Like the securityadmin fixed server role, the db_securityadmin fixed database role manages security. In this case, it manages role membership (with the exception of db_owner) as well as permissions on securables. As a result, it's another role you want to keep a close eye on. Generally speaking, I've not seen a lot of folks use this role. Typically the DBAs manage security within the database and they're already coming in as dbo. There may be some rare instances where it would be used, but I would flag those as exceptions. Therefore, if you see any members of this role within a database, it's worth checking out.
  • The db_securityadmin role can manage role membership and permissions on securables.
  • Again, since DBAs usually manage security and are usually coming in as dbo, this role is little used.
  • Normal users should not be a member of this role.
  • Applications should tend not to need this role.
  • Since it's little used, you should audit its membership for exceptions, as shown in the query below.
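A simple way to perform that audit is with the standard sp_helprolemember procedure, run in each database you want to check:

EXEC sp_helprolemember 'db_securityadmin';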

db_accessadmin
The db_accessadmin role also manages security, but handles access to the database, as the name implies. The db_accessadmin role grants, denies, or revokes permission to enter the database for logins. Combined with db_securityadmin, it lets you completely manage security into and throughout the database. Like db_securityadmin, though, access into the database is usually handled by DBAs. If they aren't members of the sysadmin fixed server role, they are members of the securityadmin fixed server role. As a result, this role should also be rarely used.
  • The db_accessadmin role can allow access into or block access to the database for logins.
  • Again, since DBAs usually manage security and have an appropriate server-level role, this role is little used.
  • Normal users should not be a member of this role.
  • Applications should tend not to need this role.
  • This is another role you should audit for membership exceptions.

db_backupoperator
The db_backupoperator role allows a member of the role to take backups of the database. However, it only allows native backups, that is, standard backups through SQL Server itself. If you're using a third-party backup product, it is probably using methods that allow for high-speed backups, and unfortunately those methods require the login executing them to be a member of the sysadmin fixed server role. As a result, this role tends to be of limited usefulness. Add to that the fact that you're usually backing up to a local drive, and it's rare to see a non-DBA having this level of access, even on a development system. Because of all this, this is another role that is typically not used much.
  • The db_backupoperator role allows a user to take backups of the database.
  • Most 3rd party backup utilities utilize methods that require sysadmin rights, which this doesn't give.
  • Another role that is little used because this functionality is usually handled by DBAs or a service account.
  • Normal users should not be a member of this role.
  • Applications should tend not to need this role, though I have seen exceptions.

db_ddladmin
The db_ddladmin is another powerful role because it allows a user to create, drop, or modify any objects within a database, regardless of who owns them. So a user could alter a stored procedure owned by dbo, for instance. This role is sometimes given to developers on non-production systems as they build custom applications. However, there is typically no reason anyone should be a member of this role on a production database. One thing the db_ddladmin does not do is allow the user to alter permissions on the objects. So a member of this role can create or modify an object, such as a stored procedure, but not alter the permissions on it unless he or she is the owner. For instance, a member of this role could create a stored procedure in a schema owned by dbo, but couldn't grant the ability to execute it.
  • The db_ddladmin role can create, drop, and alter objects within the database, regardless of who the owner is.
  • The db_ddladmin role cannot alter security.
  • It is not unusual to grant this role to developers in a non-production environment.
  • Normal users should not be a member of this role.
  • Applications should not need this role.
  • No one should normally be a member of this role on a production database.

db_datareader
The db_datareader role allows a user to issue a SELECT statement against all tables and views in the database. A DENY for the user (or for a role the user is a member of) will still block the SELECT, however. But if no permissions are set whatsoever, the user will have the ability to SELECT against the table or view. The catch with this role is that the permission is implicit. That means if you query sys.database_permissions, you will not see any permission granted, either to the db_datareader role or directly to the user. Therefore, if you need to audit for everyone who has SELECT access to particular tables in a database, you'll have to query the membership of this group via the use of sp_helprolemember:
EXEC sp_helprolemember 'db_datareader';
It is not unusual to see the db_datareader role used in databases. It's an easy way to grant SELECT permissions to everything without having to worry about it. However, due to the fact that it uses implicit permissions, I prefer to create a user-defined database role and explicitly grant permissions. With that said, here are things to remember:
  • The db_datareader role gives implicit access to SELECT against all tables and views in a database.
  • In SQL Server 2005 and up, an explicit DENY will block access to objects.
  • It is not unusual to see this role used in production for developers.
  • It is not unusual to see this role used in production for normal users.
  • Applications will occasionally need this role.
  • Creating a user-defined database role and explicitly defining permissions is still preferred over the use of this role (see the sketch below).
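A minimal sketch of that preferred approach follows. The role, table, and user names are hypothetical, and ALTER ROLE ... ADD MEMBER requires SQL Server 2012 or later (older versions use sp_addrolemember):

CREATE ROLE ReportReaders;
GRANT SELECT ON dbo.SomeTable TO ReportReaders;   -- explicit, auditable permission
ALTER ROLE ReportReaders ADD MEMBER SomeUser;     -- SQL Server 2012 and later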

db_datawriter
The db_datawriter role is like the db_datareader role in that it gives implicit access to tables and views within a database. It also can be blocked by an explicit DENY for the user or for a role the user is a member of. Unlike db_datareader, however, db_datawriter gives INSERT, UPDATE, and DELETE permissions . Again, since the permission is implicit, you will not see these rights show up in sys.database_permissions. And like with db_datareader, you'll have to check the membership of this role to determine actual permissions in the event of an audit.
  • The db_datawriter role gives implicit access to INSERT, UPDATE, and DELETE against all tables and views in a database.
  • In SQL Server 2005 and up, an explicit DENY will block access to objects.
  • Typically developers are not members of this role in production unless all users are.
  • While less common than with db_datareader, it is not all that unusual to see this role used in production for normal users.
  • Applications will occasionally need this role.
  • Creating a user-defined database role and explicitly defining permissions is still preferred over the use of this role.

db_denydatareader
Unlike the previous two roles, db_denydatareader denies access. In this case, the db_denydatareader is the same as having a DENY for SELECT on all tables and views in the database. Because DENY trumps everything else, this is not a role I've seen used frequently. If there are no permissions for a given user on an object, such as the user has no SELECT permissions on a table, then SQL Server blocks access. Therefore, if a user doesn't have SELECT permission on TableA, then the user cannot successfully issue a SELECT query against TableA. An explicit DENY is not needed. And since this affects all tables and views, that adds to the reason this database role is typically not used. And like db_datareader and db_datawriter, the DENY is implicit, meaning you'll have to query for membership in this role to determine who is affected.
  • The db_denydatareader role is denied access to SELECT against any table or view in the database.
  • Typically this role is not used.
  • The DENY is implicit.
  • Creating a user-defined database role and explicitly defining permissions is still preferred over the use of this role.

db_denydatawriter
Wrapping up our list of roles is db_denydatawriter. The db_denydatawriter has an implicit DENY on INSERT, UPDATE, and DELETE for all tables and views in the database. Again, this is not a role that sees much use, for the same reasons as db_denydatareader.
  • The db_denydatawriter role is denied access to INSERT, UPDATE, or DELETE against all tables and views in the database.
  • Typically this role is not used.
  • The DENY is implicit.
  • Creating a user-defined database role and explicitly defining permissions is still preferred over the use of this role

 

Moving Database Files From One Drive to Another

In practice, database files grow every day. Sometimes they occupy the complete disk and we may end up with insufficient space, leading to unexpected results.

In this blog, I'll explain how to transfer database files from one drive to another. To explain, I'll create a test database TestDB. Here is the command:

USE [master]
GO

IF DB_ID(N'TestDB') IS NULL
BEGIN
CREATE DATABASE [TestDB] ON PRIMARY
(
   NAME = N'TestDB'
   ,FILENAME = N'C:\MSSQL\Data\TestDB.mdf'
   ,SIZE = 3MB
   ,MAXSIZE = 2048GB
   ,FILEGROWTH = 1024KB
)
LOG ON
(
   NAME = N'TestDB_log'
   ,FILENAME = N'C:\MSSQL\Data\TestDB.ldf'
   ,SIZE = 3MB
   ,MAXSIZE = 2048GB
   ,FILEGROWTH = 10%
)
END
GO

Now check the existing file locations:

USE TestDB
GO
SELECT name, physical_name, state_desc
FROM TestDB.sys.master_files
WHERE database_id = DB_ID(N'TestDB')
Now that the database is created, you can create some tables and enter some data into the database if you wish. Otherwise, proceed as follows:
 
Step 1: Take the database OFFLINE.

USE [master]
GO
ALTER DATABASE TestDB SET OFFLINE

Step 2: Move the file physically
Now you need to move the file physically from the existing folder to the new location. Open the parent folder (here 'C:\MSSQL\Data'); you will see both the mdf and ldf files. Make sure that you cut the appropriate file, which in this example is the log file. Cut the "TestDB.ldf" file and paste it into "D:\MSSQL\Data".
 
Step 3: Update the system reference
Once you have moved the file physically, you need to update the system reference using the ALTER DATABASE ... MODIFY FILE command as shown below:
 
ALTER DATABASE TestDB
MODIFY FILE (
   NAME ='TestDB_log'
   ,FILENAME = 'D:\MSSQL\Data\TestDB.LDF'
)

Step 4: Bring the database ONLINE
The last step is to bring the database online as shown below:

ALTER DATABASE TestDB SET ONLINE
GO

We are done. Now check the file locations again using the query mentioned above.