
Ashok Blog for SQL Learners and Beginners and Experts

Friday 7 March 2014

Table Concepts


Table concepts include enabling and disabling constraints and triggers, adding new columns, and altering existing columns.

Disabling All Constraints on a Table


ALTER TABLE CustomerAddresses NOCHECK CONSTRAINT ALL
To re-enable the constraints, issue the same command, substituting NOCHECK with CHECK, as follows:

ALTER TABLE CustomerAddresses CHECK CONSTRAINT ALL

Disabling Individual Constraints

ALTER TABLE CustomerAddresses NOCHECK CONSTRAINT FK_CustomerAddresses_Customers
To re-enable the constraint, change NOCHECK to CHECK:

ALTER TABLE CustomerAddresses CHECK CONSTRAINT FK_CustomerAddresses_Customers
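Note that re-enabling a constraint this way does not validate rows that were inserted or updated while it was disabled. To re-enable and also re-validate existing data, use WITH CHECK (the doubled CHECK is intentional):

ALTER TABLE CustomerAddresses WITH CHECK CHECK CONSTRAINT FK_CustomerAddresses_Customers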

Disabling Individual Triggers on a Table

DISABLE TRIGGER triggername ON tablename

DISABLE TRIGGER Person.uAddress ON Person.Address;

Enabling Individual Triggers on a Table

ENABLE TRIGGER triggername ON tablename

ENABLE TRIGGER Person.uAddress ON Person.Address;

Enabling all triggers that were defined at the server scope

ENABLE TRIGGER ALL ON ALL SERVER;

Disabling all triggers that were defined at the server scope

DISABLE TRIGGER ALL ON ALL SERVER;
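The same pattern works for triggers defined at the database scope, for example:

DISABLE TRIGGER ALL ON DATABASE;
ENABLE TRIGGER ALL ON DATABASE;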

Adding a New Column to a Table

ALTER TABLE [dbo].[defect] ADD [defect_id] [bigint] IDENTITY(1,1) NOT NULL

Altering an Existing Column in a Table

ALTER TABLE [dbo].[defect] ALTER COLUMN [completion_hours] [nvarchar](50) NULL
Primary key defined at the table level:

create table DepartmentManager(
Departmentmanagerid int identity(1,1),
empid int,
Departmenttid int,
primary key(Departmentmanagerid)
)

Adding Primary Key constraint using Alter table command

Alter Table employeemaster add constraint pk_EmpPrimaryKey primary key(empid)

Dropping a primary constraint from a table

Alter Table tablename Drop constraint name

alter table employeemaster drop constraint pk_EmpPrimaryKey
Adding Foreign Key and dropping Foreign key

Alter Table DepartmentManager Add Constraint Fk_Departmenttid Foreign Key (Departmenttid) References Department (Departmenttid)

Alter Table DepartmentManager Drop Constraint Fk_Departmenttid
Adding Unique Key and dropping Unique key

Alter table tablename add constraint constraintname Unique (Columnname)

alter table employeesalarymaster add constraint Uni_empid unique (empid)
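And the drop half, using the constraint just created:

alter table employeesalarymaster drop constraint Uni_empid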

Adding Check constraint
 
CREATE TABLE employeesalarymaster(empsalaryid int identity(1,1), Empid int, Empsalary numeric(10,2) not null, SalaryDt datetime, check (Empsalary > 5000))

Note: A check constraint must be a Boolean expression that can be evaluated using the values in the row being inserted or updated.
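A check constraint can also be added to an existing table with ALTER TABLE; a sketch using the table above (the constraint name chk_empsalary is just an example):

ALTER TABLE employeesalarymaster ADD CONSTRAINT chk_empsalary CHECK (Empsalary > 5000)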


The power of SQL Server amazes me all the time; to see it, all you need is a little creativity and knowledge of the syntax. Sometimes derived tables can be very useful. Say you would otherwise create a view for a single query, use it within another query, and then drop the view because it has no further use: here a derived table can help you, and it benefits you by not creating a catalog entry (the view) on the server.
For repeated queries, a SQL derived table used multiple times performs well, since its definition is cached; it will not drag down your performance. A SQL derived table differs from a temporary table in that a derived table exists only for the duration of the query, while a temporary table exists until it is dropped or the session ends, and it also uses disk space in the tempdb database, unlike a derived table. So it's better to make use of derived tables wherever possible, eliminating one-time-use views and temporary tables. In my view, this will improve the performance of your query.
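A minimal sketch of a derived table, reusing the employeesalarymaster table from the examples above; the inner query behaves like a one-off view that exists only for this statement:

-- The subquery in FROM is the derived table; no catalog entry is created
SELECT d.Empid, d.AvgSalary
FROM (SELECT Empid, AVG(Empsalary) AS AvgSalary
      FROM employeesalarymaster
      GROUP BY Empid) AS d
WHERE d.AvgSalary > 5000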

Fixing the error: The multi-part identifier ... could not be bound in Join statements

Thursday 6 March 2014

Important Considerations When Moving a Database Between Two SQL Server Instances

Ref:

http://www.karaszi.com/sqlserver/info_moving_database.asp

http://msdn.microsoft.com/en-us/library/ms187580.aspx

Overview
Moving a database from one SQL Server instance to another is a pretty frequent task, and there are many reasons for doing it, for example:

  • Upgrade by restoring into an instance which is a more recent version of SQL Server
  • Move to a new instance which has a system collation that is right for the database
  • Server consolidation, reducing number of instances
  • Using some database-level HA/DR technology (like log shipping or database mirroring) where a failover moves the database to a different server
However, the database is not all that you want to worry about. There are things outside the database, which you also should consider. The purpose of this article is to be an aid for those situations.


Versions
This article is not written with some specific version of SQL Server in mind, but is generally targeted towards more recent versions. For instance, when referencing a GUI, we will assume SQL Server Management Studio and not Enterprise Manager. Also, we will use the catalog views instead of old-style system tables.


Plan ahead!
For future versions of SQL Server, you really should look into the "Contained Database" concept. The whole idea is to make the database less dependent on what is outside the database. We are likely to see a first implementation of this concept in SQL Server 11, and hopefully it will evolve and mature over time. The Web-based Books Online documentation for Contained Databases covers this. Note that at the time of writing this article, this functionality is still in development, so functionality as well as documentation can change when SQL Server 11 is released.


Scope and purpose
The main purpose of this article is to list things that your database might use which live outside that database. The focus is on listing these things, so you don't forget about them. Secondary is to elaborate or point to resources that have more information about each particular topic. This article does not focus on going from a lower to a higher version of SQL Server. The actual move process is of course the same, but there are additional considerations when upgrading to a higher version.
The queries below are in general meant to be executed from your database context.


Move or copy?
These operations have so much in common that I will not treat them differently in the rest of the text. Basically, a move is the same as a copy, except that you also delete the database from the source server (after you are satisfied that everything is fine on the new server, of course!). However, when moving a database, there will be things left behind on the old server which you probably want to clean up. Here are things that can be "left behind":

  • Logins which aren't used in other databases.
  • SQL Server Agent jobs which reference the database.
  • Backup history information. This is not that critical since you hopefully have other means to trim this (so over time this will be aged out automatically).
  • Backup files still residing on the hard drive. Of course, it might be a good idea to keep some of these as an extra fallback measure.
  • Backup devices (as seen in Server Objects, Backup Devices and sys.backup_devices).
  • ... and check out the rest of this article. Things you want to bring over to the new server will also be things you might want to remove from the old server.

How to do the actual move
Unless you have some HA/DR technology where moving the database is part of that technology, the first obvious step is the database itself.

I prefer BACKUP DATABASE and RESTORE DATABASE. For the backup, consider using the COPY_ONLY option of the BACKUP command if you are doing a copy, or the NORECOVERY option if you are doing a move. The restore process might require you to use the MOVE option to specify some alternate filename for any of the database files to be created. Say you have a backup file named mydb.bak, and want to restore a database from this backup file. You might not know anything about what databases are included in this backup file, what path their database files were using etc. Start by investigating how many backups are in this file, types of backup and the database name for each backup in there:
RESTORE HEADERONLY FROM DISK = 'R:\mydb.bak'
You will see one row per backup contained in this backup file. Based on information above, you now want to see each database file used by that database, for the desired database in the backup file. Use the value from the "position" column from above, in the FILE option:
RESTORE FILELISTONLY FROM DISK = 'R:\mydb.bak' WITH FILE = 1
You will see one row per database file. The two columns you want to focus on are LogicalName and PhysicalName. Use the values from these columns in the MOVE option for the actual RESTORE command:
RESTORE DATABASE mydb FROM DISK = 'R:\mydb.bak'
WITH MOVE 'mydb' TO 'C:\mydb.mdf',
     MOVE 'mydb_log' TO 'C:\mydb_log.ldf',
     REPLACE

You might want to read about the REPLACE option for the RESTORE command in Books Online. Be careful if you use it and the destination database exists - so you don't overwrite the wrong database by mistake!

Another option is to detach the database using sp_detach_db and then attach it using CREATE DATABASE ... FOR ATTACH; or detach and attach using the GUI.
This will actually detach the database from the source server! Many things can go wrong here, which is why I prefer backup and restore. The backup process is online, and since the source database was never made unavailable in the first place, you don't risk destroying anything in case of mistakes, data corruption, or whatever other things can happen (you'd be surprised). You might argue that it takes time to produce a backup file, but you should have such a backup anyhow! If that isn't recent enough, then just do an extra differential backup or a log backup. In addition, the size of the backup file(s) to transfer to the new instance is likely smaller than the combined size of the database files (backups do not include unused extents). And if you are on SQL Server 2008 EE or 2008 R2 SE, then you can also use the COMPRESSION option of the backup command to make them even smaller. Furthermore, you can split the backup into several files by using the striping option of the backup command (just specify more than one backup file).
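For completeness, a minimal sketch of the detach/attach route, reusing the file names from the restore example above:

-- On the source instance (fails if the database is in use)
EXEC sp_detach_db @dbname = 'mydb';

-- On the destination instance, after copying the files over
CREATE DATABASE mydb
ON (FILENAME = 'C:\mydb.mdf'),
   (FILENAME = 'C:\mydb_log.ldf')
FOR ATTACH;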

Yet another option is to use some tool that scripts all objects in the database and exports the data, then uses the scripts to create the objects and imports the data. This has the drawback of not moving the database at the binary level, making it a more fragile process than backup/restore or detach/attach (in case something goes wrong with the scripting process). SSIS has a "Transfer SQL Server Objects" task, which can also be used from the "Copy Database Wizard". SSIS also has other task types that can be useful here; some will be mentioned below.

If this is a move, then you don't want users doing modifications in the source database after you take the copy of it (regardless of which of the above methods you use). This includes open transactions that can later become committed - you don't want to manually re-integrate such modifications into the destination database! You can set the database to read-only or single user before you, for instance, do the database backup.
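For example, something along these lines before taking the final backup:

-- Kick out other users and prevent new modifications
ALTER DATABASE mydb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- or, alternatively, make the database read-only
ALTER DATABASE mydb SET READ_ONLY;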


Logins
Stored in the master database
Main catalog views: sys.server_principals, dbname.sys.database_principals

Make sure you have the necessary logins with the same name on the destination server, and for a SQL Server login also the same password and SID.
This is probably the most obvious and well-known issue. A user in a database is "mapped" to a login. Inside the database, you can list the users through the sys.database_principals catalog view. Pay special attention to the sid column. The user is connected to a login in sys.server_principals, which also has a sid column. The sid column is the mapping from the user to the login.

For Windows logins, the SID is produced by the AD or SAM database, and unless you move the database to a SQL Server in a different domain, then all you have to do is to create the same Windows login on the new server.

For SQL Server logins, SQL Server will invent a sid when you create the login. So, if you just create the login on the new server, the sids won't match. If you use the GUI and look at the login on the new server, you won't see it mapped to the database. If you use the GUI to list the users inside the database, you won't see it mapped to a login. This is what we call an "orphaned user". So, SQL Server logins and users for those logins require a bit more attention than Windows logins. This query lists orphaned users in the database (it doesn't differentiate a deliberate user without login in the source db; easiest is probably to check them manually):
SELECT *
FROM sys.database_principals AS d
WHERE NOT EXISTS
   (SELECT *
    FROM sys.server_principals AS s
    WHERE s.sid = d.sid)
AND type_desc = 'SQL_USER'
AND name NOT IN ('guest', 'INFORMATION_SCHEMA', 'sys')


If the login already exists on the new server, then you can adjust the sid for the user inside the database so it matches the right login. (The old-fashioned way to do this is using sp_change_users_login, but as of SQL Server 2005 SP2, we are recommended to use the ALTER USER command instead.) Here's an example of using the ALTER USER command to map to an existing login:
ALTER USER Joe WITH LOGIN = Joe

If the logins do not exist on the destination instance, then I suggest you use a utility to script them on the source server and from that script create them on the destination server, making sure they have the same sid and password. There is a "Transfer Logins" SSIS task, but that doesn't carry over the password, so instead I suggest you use the sp_help_revlogin stored procedure (there is one version for SQL Server 7.0 and 2000, and another for SQL Server 2005 and more recent). These procedures will also take care of server role assignments and also carry over Windows logins.

Make sure that you have covered necessary server roles and permissions assigned to logins.
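A quick, non-exhaustive way to list server-level permissions per login on the source instance:

SELECT pr.name, pr.type_desc, pe.permission_name, pe.state_desc
FROM sys.server_principals AS pr
INNER JOIN sys.server_permissions AS pe ON pe.grantee_principal_id = pr.principal_id
ORDER BY pr.name;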

Also, your application might use some login without that login existing as a user in the database. A login which has the sysadmin server role can access all databases (using the dbo user), and your application might be using such a login. Your application won't be able to log in using such a login if it doesn't exist on the destination server, or if it exists but with a different password.


The application
Obviously you want to make sure that the application attempts to connect to the correct server and also that it can connect to that server.

  • Make sure you go through all the places where your application has the name of your SQL Servers. This will typically be in connection strings, but can also be in ODBC configurations or other configuration files.
  • Make sure that the application can reach the new SQL Server, things like firewalls and DNS entries.
  • If you use Database Mirroring, I've noticed that you might need some changes in the application code to fail over gracefully, especially if you use connection pooling. The app can get a connection from the pool, but the connection is closed because you had a failover. The app should react to this and reset the pool (so we get fresh connections with the correct, active server name), or something similar.

Collation
SELECT
 SERVERPROPERTY('Collation') AS server_collation,
 DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS database_collation
The database has a collation, which among other things serves as the default collation when you create tables. The instance also has a collation, which acts as the default collation when a new database is created. Both can be overridden using the COLLATE clause. You want to make sure that moving the database from one instance to another won't cause any collation-related problems. The most famous such problem occurs if the database collation differs from the instance collation: you create a temp table (which gets the system database collation) and then compare a column in that temp table with a column in your database. Such a comparison can cause a "collation conflict" error. Imagine that on the source server the collations matched, but the destination server has a different collation for the system databases: this is exactly the situation where this problem can bite you. You can check the server collation using SERVERPROPERTY('Collation'). You see the database collation in sys.databases and also in the output from RESTORE HEADERONLY.


Database ownership
SELECT
 (SELECT SUSER_SNAME(owner_sid) FROM sys.databases WHERE database_id = DB_ID()) AS owner_according_to_master,
 (SELECT SUSER_SNAME(sid) FROM sys.database_principals WHERE name = 'dbo') AS owner_according_to_the_app_database
The login that restores, attaches, or creates the destination database will be the owner of the database, as seen from the instance (master database) perspective. You can see this in the sid column in sys.databases. However, if you use a binary method to move the database (restore or attach), then the owner according to the database will be the same as before (possibly an orphaned user, see above). A problem arises if the login that used to own the database (and which on the old instance could access the database as the dbo user, being the owner) is no longer the owner on the new instance. I.e., make sure that the login that used to own the database will be able to access it on the destination instance, with the proper privileges. I typically have sa as owner for most of my databases. Here's an example of how to change the owner of a database:
ALTER AUTHORIZATION ON DATABASE::AdventureWorksDW2008R2 TO sa


Service Broker
You will have to enable Service Broker inside the database after attach or restore. This is of course only relevant if you use Service Broker. You cannot have any connections to the database when you enable (or disable) broker.
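A minimal sketch; WITH ROLLBACK IMMEDIATE forcibly closes existing connections, so use it with care:

ALTER DATABASE mydb SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;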


Linked Servers
Stored in the master database, can be scripted using SSMS, listed in SSMS under Server Objects.
SELECT * FROM sys.servers WHERE is_linked = 1


Database Mail configuration
Stored in the msdb database.
It is possible (but pretty rare) that code executed in your databases uses sp_send_dbmail to send email. If that is the case, you will have to enable and set up the Database Mail functionality on the destination instance.


Operators
Stored in the msdb database, can be scripted using SSMS.
SELECT name FROM msdb.dbo.sysoperators

It is very unlikely that your code extracts an email address based on an operator name using the above table in msdb and then uses that email with, for instance, sp_send_dbmail. Unlikely, but possible.


Jobs
Stored in the msdb database, can be scripted using SSMS.
SELECT name FROM msdb.dbo.sysjobs WHERE name NOT IN ('syspolicy_purge_history')
A job isn't tied to a specific database, so you want to go over all jobs to determine which ones you need to carry over to the new server.


SQL Server Agent Event Alerts
Stored in the msdb database, can be scripted using SSMS
SELECT name FROM msdb.dbo.sysalerts

Alerts aren't tied to a specific database (well, an alert can be limited to a certain database...), so your app won't stop working if you don't carry over your alerts. But it is likely that you also want alerts on the destination server if you have them on the source server.


Maintenance Plans
Stored in the msdb database.
SELECT name FROM msdb.dbo.sysmaintplan_plans

It is probably easiest to just re-create these from scratch on the new instance. Hopefully you have a standard for how to set up maintenance plans (if you use them), so it is a quick process to create new plans or add this database to the current plans if such already exist on the destination server.


User-defined error messages
Stored in the master database.
SELECT * FROM sys.messages WHERE message_id > 50000
I don't see user-defined error messages used much, but it is easy to check whether you have any messages with message_id > 50000.


Replication
Stored all over the place
What to do depends on whether you move a subscription database or a publisher database. But you do want to stop and think for a while, and probably remove that part of the replication setup and re-do it.


Filestream
SELECT * FROM sys.filegroups WHERE type_desc = 'FILESTREAM_DATA_FILEGROUP'
SELECT * FROM sys.database_files WHERE type_desc = 'FILESTREAM'
The actual filestream data is brought over automatically if you use BACKUP and RESTORE. You are in for more and messier work if you use some other method, so don't even go there - use BACKUP and RESTORE! You also need to make sure that the destination instance is configured to support filestream (both in the "SQL Server Configuration Manager" tool and through sp_configure).
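The instance-level part of that configuration looks something like this (level 2 enables both T-SQL and Win32 streaming access):

EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;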

Other things which might be relevant, in no particular order:

  • Server Audit
  • Policies
  • Data Collection
  • Resource Governor
  • Backup Devices
  • Endpoints
  • DDL triggers at the server level
  • Credentials
  • Change Data Capture
  • Certificates in the master database that are used to sign procedures or assemblies
  • Consider doing things such as updating statistics, check database integrity etc. after the move.
  • If you go from a production to a QA/test/dev environment, then there might be requirements to obfuscate some data / keep some data in the destination database.

Tricky things
Some things are particularly tricky, and I want to take the opportunity to point them out to you. I.e., if you have any of these situations, expect more work than just a standard "backup/restore-and-bring-over-the-other-stuff" (which is basically the scenario for this article). I will not go into details on how to handle these situations; I just point them out for you, so you can warm up your favourite search engine and pour that nice cup of tea.

  • You need to change the collation of the database after the move, on the destination instance. Changing the collation of a database is not a simple task. There is an article and a PowerShell script that can be useful.
  • The destination SQL Server is of a lower version than the source SQL Server. Note that this includes going from SQL Server 2008 R2 to SQL Server 2008. You cannot restore or attach a database which is of a higher version. Here are some options that can be helpful:

Monday 3 March 2014

Measurements

1 byte = 8 bits
1 kilobyte (KB) = 1,024 bytes
1 megabyte (MB) = 1,024 kilobytes, or ~1,000,000 bytes
1 gigabyte (GB) = 1,024 megabytes, or ~1,000,000,000 bytes
1 terabyte (TB) = 1,024 gigabytes, or ~1,000,000,000,000 bytes
1 petabyte (PB) = 1,024 terabytes, or ~1,000,000,000,000,000 bytes

Backup and Restore Method in SQL Server

Backup and Restore Method in SQL Server Demo:

Before going to the demo, kindly read and learn the concepts below.

Backup:
We have to back up the source db to a .bak file (it contains all data from the beginning up to the time of the backup).

Restore:
Restore the db from the backup file, with the appropriate options.
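A minimal T-SQL sketch of the whole cycle; the paths and the logical file names ('RM' and 'RM_log') are illustrative and should be verified with RESTORE FILELISTONLY:

BACKUP DATABASE RM TO DISK = 'C:\RM.bak'
GO
RESTORE DATABASE RM2 FROM DISK = 'C:\RM.bak'
WITH MOVE 'RM' TO 'C:\RM2.mdf',
     MOVE 'RM_log' TO 'C:\RM2_log.ldf'
GO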


When you issue a RESTORE DATABASE or RESTORE LOG command the WITH RECOVERY option is used by default.  This option does not need to be specified for this action to take place.

If you restore a "Full" backup, the default setting is to RESTORE WITH RECOVERY, so after the database has been restored it can then be used by your end users.

If you are restoring a database using multiple backup files, you would use the WITH NORECOVERY option for each restore except the last.

T-SQL
Restore full backup WITH RECOVERY

As mentioned above this option is the default, but you can specify it as follows.
RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.BAK'
WITH RECOVERY
GO
Recover a database that is in the "restoring" state

The following command will take a database that is in the "restoring" state and make it available for end users.
RESTORE DATABASE AdventureWorks WITH RECOVERY
GO
Restore multiple backups using WITH RECOVERY for the last backup

The first restore uses the NORECOVERY option so additional restores can be done. The second command restores the transaction log and then brings the database online for end user use.
RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.BAK'
WITH NORECOVERY
GO
RESTORE LOG AdventureWorks FROM DISK = 'C:\AdventureWorks.TRN'
WITH RECOVERY
GO

SQL Server Management Studio
When restoring using SSMS the WITH RECOVERY option is used by default, so there is nothing that needs to be set but this can be set or changed on the options page when restoring.


How to restore a SQL Server transaction log backup 

(RESTORE LOG)

Overview

The RESTORE LOG command allows you to restore a transaction log backup.  The options include restoring the entire transaction log or to a certain point in time or to a certain transaction mark. 

Explanation

When restoring a transaction log you will need exclusive access to the database, which means no other user connections can be using the database.  If the database is in a restoring state this is not an issue, because no one can be using the database.
The RESTORE LOG option can be done using either T-SQL or using SQL Server Management Studio.

T-SQL
Restore a transaction log backup

To restore a transaction log backup the database needs to be in a restoring state. This means that you would have to restore a full backup and possibly a differential backup as well.
RESTORE LOG AdventureWorks FROM DISK = 'C:\AdventureWorks.TRN'
GO
Restore multiple transaction log files (NORECOVERY)

The NORECOVERY option leaves the database in a restoring state after the restore has completed. This allows you to restore additional files to get the database more current. By default this option is turned off. As mentioned above, the database needs to be in a restoring state, so this would have already been done for at least one backup file that was restored.
This shows restoring two transaction log backups; the first uses NORECOVERY and the second statement does not, which means the database will be accessible after the restore completes.
RESTORE LOG AdventureWorks FROM DISK = 'C:\AdventureWorks_1.TRN' WITH NORECOVERY
GO
RESTORE LOG AdventureWorks FROM DISK = 'C:\AdventureWorks_2.TRN'
GO
Restore a differential backup

To restore a differential backup, the options are exactly the same. The first thing that has to happen is a full restore using the NORECOVERY option. Then the differential can be restored. Note that full and differential backups are restored with RESTORE DATABASE, not RESTORE LOG.
RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.BAK' WITH NORECOVERY
GO
RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.DIF'
GO
Restore multiple transaction log backups from the same backup file

Let's say we use the same backup file, AdventureWorks.TRN, to write all of our transaction log backups. This is not a best practice, because if the file is corrupt this could corrupt all of the backups in this file. We can use RESTORE HEADERONLY to see the backups and their positions in the backup file. Let's say RESTORE HEADERONLY tells us that we have 3 transaction log backups in this file and we want to restore all three. The restore commands would be:

RESTORE LOG AdventureWorks FROM DISK = 'C:\AdventureWorks.TRN' WITH NORECOVERY, FILE = 1
GO
RESTORE LOG AdventureWorks FROM DISK = 'C:\AdventureWorks.TRN' WITH NORECOVERY, FILE = 2
GO
RESTORE LOG AdventureWorks FROM DISK = 'C:\AdventureWorks.TRN' WITH FILE = 3
GO


Demo:

Source and destination db: see below


Source DB details: see below


Back up the source db as seen below


After backing up, choose Restore on the destination db as seen below






After choosing the .bak file and clicking the Options button, you can see one conflict: the database being restored is RM, but "Restore As" shows c:/programfiles............/mg.mdf. So if you choose OK, it will say the mdf was not found, or give a "database is in use" error message.



So we have to choose the destination db file correctly via the ... (ellipsis) button in the "Restore As" column,
as seen below



Because choosing the file this way is irritating, you can use

sp_helpdb 'dbname'

It will give the path of the db files, so you can copy and paste them easily.

After choosing the file, tick Overwrite and click OK to restore the db as seen below


We chose RESTORE WITH NORECOVERY. That means the db is still restoring, and we can add more files to the restore.
We can see the db status as below:
RM (Restoring...)



If any transaction happened in the source db after the backup, we want that data in the destination db as well. For that purpose we create one sample table in the source db, as seen below


So back up the transaction log to save that transaction, as seen below
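The T-SQL equivalent of this step would be something like the following (the path is illustrative):

BACKUP LOG RM TO DISK = 'C:\RM_log.trn'
GO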


Then restore that transaction log on the destination db as seen below




If you apply the transaction log restore above without NORECOVERY, the db will be ready for end users to use,
as seen below.

After restoring the db,
the source and destination dbs look the same, including the later transaction made after the backup, as seen below.



Thanks for seeing my demo.

Now you can fix orphaned users as described in my blog, or kindly use the scripts below.

Script 1:

USE MASTER
GO
SELECT name as SQLServerLogIn, SID as SQLServerSID FROM sys.syslogins
WHERE [name] = 'tester'
GO

USE RM
GO
SELECT name as DatabaseUser, SID as DatabaseSID FROM sysusers
WHERE [name] = 'tester'

USE MASTER
GO
SELECT name as SQLServerLogIn, SID as SQLServerSID FROM sys.syslogins
WHERE [name] = 'reader'
GO

USE RM
GO
SELECT name as DatabaseUser, SID as DatabaseSID FROM sysusers
WHERE [name] = 'reader'


Script 2:


USE rm
GO

-- List orphaned users in the current database
EXEC sp_change_users_login 'Report'

-- Map an orphaned user to an existing login with the same name
EXEC sp_change_users_login @Action='update_one', @UserNamePattern='tester', @LoginName='tester'

EXEC sp_change_users_login @Action='update_one', @UserNamePattern='reader', @LoginName='reader'

-- If the login does not exist yet, create it first
EXEC sp_addlogin 'tester'

-- Auto_Fix maps the user to a login with the same name, creating the login
-- with the given password if it does not exist
EXEC sp_change_users_login 'Auto_Fix', 'tester', NULL, '123456'
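As the moving-databases article above notes, sp_change_users_login is the old-fashioned way; as of SQL Server 2005 SP2 the recommended equivalent is:

ALTER USER tester WITH LOGIN = tester;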



Sunday 2 March 2014

Difference Between SQL Server 2005 and 2008


Sr No | SQL Server 2005                           | SQL Server 2008
1     | XML datatype is introduced.               | XML datatype is used.
2     | Cannot encrypt the entire database.       | Can encrypt the entire database (introduced in 2008).
3     | Datetime is used for both date and time.  | Separate Date and Time data types are used.
4     | No table datatype is included.            | Table datatype is introduced.
5     | SSIS is introduced.                       | SSIS is available in this version.
6     | CMS is not available.                     | Central Management Server (CMS) is introduced.
7     | PBM is not available.                     | Policy Based Management (PBM) is introduced.


What is new in SQL Server 2008:
 Ref:
http://technet.microsoft.com/en-us/magazine/2008.04.overview.aspx?pr=blog
AT A GLANCE:
  • More powerful management
  • Improved performance and scalability
  • Better security and availability
  • Changes for developers

Once again, Microsoft has released a new version of SQL Server that promises to improve life for database administrators. The latest version is SQL Server 2008, and it offers a large variety of new features and functionality that will make administration a whole lot better.
Database administrators who have been using SQL Server® 2005 for any length of time will find many of the same features that they use in their day-to-day jobs, but they'll also soon discover that these familiar tools have been enhanced in SQL Server 2008. The new functionality built on the existing features goes a long way in reducing the number of workarounds or customizations once needed for using various features in complex database environments.
New features in SQL Server 2008 involve a large range of database job roles, making it difficult to categorize them. As I classify features under different topics, I realize that some readers will be wondering why I placed Feature X under Category Y when it clearly belongs under Category Z. It's a matter of perspective, and it's affected by how your company does business.
I do understand that database people often find themselves doing many different job roles, but I will try to break down the new features so they fit into the following generic categories: Management, Scalability, Performance, High Availability, Security, Development, and Business Intelligence.

What's New for Management?
For database administrators (like me), the additional management functionality makes SQL Server 2008 a very exciting new product. The new policy management, multiple server query capability, configuration servers, and data collector/management warehouse offer powerful new abilities for database administrators who are often responsible for managing large and complex database environments with hundreds or thousands of databases on dozens or even hundreds of servers.
The SQL Server 2008 Policy Management feature, which was actually called Declarative Management Framework in the Community Technology Previews (CTPs), allows you to create and execute configuration policies against one or more database servers. With these policies, you can ensure that standard configuration settings are applied and maintained on each of the targeted servers and databases. You can see an example of this feature in Figure 1.
Figure 1 The Data and Log File Location Best Practice policy
Policies are created from a predefined set of facets. Each facet contains a subgroup of SQL Server 2008 configuration settings and other events that you can control. You pair these facets with conditions in order to create a policy. Conditions are the values that are allowed for the properties of a facet, the configuration settings, or other events contained within that facet.
Conditions are also values used for policy filters. Say you want the policy to be executed only against a certain database. In this case, you can create a condition that contains the name of the database and then add this condition to the policy. Now the policy will only apply to that one database. Trust me on this—SQL Server 2008 Policy Management may sound complex, but once you try it you'll realize it's pretty intuitive.
The new Multiple Server Interaction and Configuration Servers capabilities come in handy when you need to execute queries against multiple servers at the same time. You can register servers in your Management Studio and then place those servers together under a grouping. When you need to execute a policy or query against all the servers in the grouping, you simply right-click on the grouping and do so.
As an added benefit, you can configure this feature to return one resultset per server or merge all the resultsets together into one big resultset. You can also specify whether you want the server and database names as part of the results so you can separate the individual results from each server. Being able to store the registered servers on the configuration server rather than in each individual's Management Studio is a big benefit.
Another nice new management feature is the Data Collector. Database administrators often need to collect management data from a large number of servers, and many of these DBAs have created their own custom solution for doing so. The Data Collector is a built-in mechanism that eases the task of collecting management-related data. It allows you to use the SQL Server Agent and SQL Server Integration Services (SSIS) to create a framework that collects and stores your data while providing error handling, auditing, and collection history.
Unlike third-party tools and custom jobs, the Data Collector will be easily understood by most database administrators since it uses SQL Server Agent and SSIS to create a set of jobs and packages to handle the connections, collection, and storage of data (as you can see in Figure 2). Once this data is stored in a central location, referred to as the Management Warehouse, it can be viewed and organized through a set of T-SQL statements and SQL Server 2008 Reporting Services reports. This central data store makes it much easier to analyze and view the overall management metrics of the database environment.
Figure 2 The Data Collector Disk Usage Log file

What's New for Scalability?
Over the years, SQL Server database administrators have found their database environments becoming increasingly large. As the size of your database environment increases, you need new methods and tools to achieve the scalability that most enterprises require. SQL Server 2008 has introduced several new features that will help.
SQL Server 2008 has built-in compression that allows you to compress the database files and the transaction log files associated with the compressed database. SQL Server 2005 introduced the ability to compress data on a read-only file or filegroup, but this form of compression simply used the compression ability of Windows® NTFS. With SQL Server 2008, you now get both row-level and page-level compression, offering benefits you don't get with compression at the data file level.
Compression at the row and page levels reduces the amount of data space needed, plus it reduces the amount of memory needed since the data remains compressed while in memory. Compressed data in memory results in increased memory utilization, which benefits the scalability of many systems.
SQL Server 2008 also introduces compression at the backup level. While database backups only back up the active portion of the database, this still represents as many as hundreds of gigabytes or even dozens of terabytes. In database environments that have more than one copy of a multi-terabyte backup file, these backups often take up valuable storage space that could be used more effectively. By allowing database administrators to compress their backup files, SQL Server 2008 frees up some of this space, so it can be used for live data.
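For example, a compressed backup would look something like this (the path is illustrative, and the edition must support backup compression):

BACKUP DATABASE AdventureWorks TO DISK = 'C:\AdventureWorks.bak'
WITH COMPRESSION
GO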
Then there's the Resource Governor. This new feature lets you define the amounts of resources that individual or groupings of workloads are allowed to use during execution. With Resource Governor, you can create an environment in which many different workloads coexist on one server without the fear of one or more of those workloads overwhelming the server and reducing the performance of the other workloads.
The benefit of this feature is that you can more effectively use the total amount of resources that are available on your database servers. Figure 3 shows an example of using the Resource Governor to limit activity on a server.
USE master
go

--Drop function
IF OBJECT_ID('rgclassifier_demo','Function') IS NOT NULL
DROP FUNCTION rgclassifier_demo
go

--Create a classifier function for report group
CREATE FUNCTION rgclassifier_demo() RETURNS SYSNAME 
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @group_name AS SYSNAME
  IF (USER_NAME() LIKE '%Launch_Demo%')
         SET @group_name = 'demogroup'
    RETURN @group_name
END
GO

--Drop the workload group if it already exists
IF EXISTS (SELECT name FROM sys.resource_governor_workload_groups 
  WHERE name = 'demogroup')
BEGIN
  DROP WORKLOAD GROUP demogroup
END
GO

--Create workload group
CREATE WORKLOAD GROUP demogroup
GO

--Register the classifier function with 
--Resource Governor
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION= dbo.rgclassifier_demo)
GO

--Alter the demogroup workload group to limit
--each request to 10 seconds of CPU time
ALTER WORKLOAD GROUP demogroup
WITH (REQUEST_MAX_CPU_TIME_SEC = 10)
GO

--Create a new resource pool and set a maximum CPU limit for all workloads.
IF EXISTS (SELECT name FROM sys.resource_governor_resource_pools
  WHERE name = 'pooldemo')
DROP RESOURCE POOL pooldemo
GO
  
CREATE RESOURCE POOL pooldemo
WITH (MAX_CPU_PERCENT = 40)
GO

--Configure the workload group so it uses the 
--new resource pool. 
ALTER WORKLOAD GROUP demogroup
USING pooldemo
GO

--Apply the changes to the Resource Governor
--in-memory configuration.
ALTER RESOURCE GOVERNOR RECONFIGURE
GO


What's New for Performance?
The general performance of databases improves with SQL Server 2008. Thanks to several new features found in SQL Server 2008, you can control and monitor the performance of your databases and the applications that execute against them.
When you have large numbers of transactions performed every second, the locking that normally occurs during these transactions can have a negative impact on the performance of your database applications. SQL Server is designed to reduce the total number of locks a process holds by escalating locks from the smaller row-level and page-level locks to large table-level locks. But it's important to understand that this escalation of locks can cause problems. For example, a single transaction can lock an entire table and prevent other transactions from working with that table.
SQL Server 2008 works with the table partitioning mechanism (which was introduced in SQL Server 2005) to allow the SQL Server engine to escalate locks to the partition level before the table level. This intermediary level of locking can dramatically reduce the effects of lock escalation on systems that have to process hundreds and thousands of transactions per second.
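Partition-level escalation is opt-in per table; a sketch, assuming a partitioned table named dbo.MyPartitionedTable:

--Allow escalation to the partition level instead of the whole table
ALTER TABLE dbo.MyPartitionedTable SET (LOCK_ESCALATION = AUTO)
GO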
SQL Server 2008 offers several new query processor improvements for when the query interacts with partitioned tables. The query optimizer can now perform query seeks against partitions as it would against individual indexes by only working with the partition ID and not the partitioning mechanism at the table level.

What's New for High Availability?
As database environments become more complex and databases grow in size, the ability to ensure the availability of those databases becomes increasingly difficult. The familiar mechanisms you have used in the past to achieve high availability are still present in SQL Server 2008. But some of these features have been enhanced in SQL Server 2008 and some new ones have been added.
With SQL Server 2005, many administrators started implementing database mirroring to achieve high availability. SQL Server 2008 offers many improvements for the practice of database mirroring. For instance, in the past, database mirroring occasionally had performance issues related to moving transaction log data from the principal to the mirrored databases. In response, SQL Server 2008 now reduces the amount of information that is moved across the network from the principal's transaction log to the mirror's transaction log by compressing the information before sending it to the mirror's transaction log for hardening.
You now have the ability to repair corrupted data pages on the principal. If a principal database suffers corrupt data pages due to errors 823 and 824, the principal can request a fresh copy of those data pages from the mirrored servers. This request of good data pages is an automated process that is transparent to any users who are currently accessing the principal databases.
Another new feature, Hot Add CPU, lets you add additional CPUs to a database server without affecting the availability of the databases residing on that server. However, you should know that Hot Add CPU does have some limitations, as it is only useful when running the 64-bit Itanium-based Windows Server® 2008 Enterprise Edition or Datacenter Edition, and it requires the Enterprise Edition of SQL Server 2008.

What's New for Security?
SQL Server 2005 introduced data security in the form of data encryption. With SQL Server 2008, encryption is greatly enhanced with the introduction of two features: Extensible Key Management and Transparent Data Encryption.
Extensible Key Management allows for an enhanced structure to safely store the keys used in the encryption infrastructure—not only in the database itself but also outside the database in third-party software modules or with a Hardware Security Module.
Transparent Data Encryption offers improved flexibility for encrypting data by allowing encryption to be a property of the database and not just the result of functions in a line of code. The result is that administrators do not have to perform the large number of changes to their database structure and application code that are required when they perform encryption at the data level. The code in Figure 4 shows how you can encrypt a database with Transparent Data Encryption.
USE master;
GO

--Create a master key
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'YouWillNeedToCreateAStrongPassword';
GO

--Create a certificate to use with TDE
CREATE CERTIFICATE TDECERT WITH SUBJECT = 'TDECert'
GO

--Change to the database to encrypt
USE AdventureWorks
GO

--Create the database encryption key
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_128 --Use a strong algorithm
ENCRYPTION BY SERVER CERTIFICATE TDECERT
GO

--Turn on encryption for the database using the
--database encryption key
ALTER DATABASE AdventureWorks
SET ENCRYPTION ON
GO


What's in Store for Developers?
Database administrators are not the only people that will benefit directly from the changes in SQL Server 2008. There are a number of new features that are designed to help database developers. These range from several new T-SQL enhancements to new components that can help developers create and utilize database queries.
Many database developers are responsible for creating the queries that are needed for returning the data required for their applications. You're probably familiar with the LINQ (Language Integrated Query) tool that enables database developers to issue queries against a database using a Microsoft® .NET-based programming language instead of the normal T-SQL statements. Well, SQL Server 2008 enhances LINQ by providing a new LINQ to SQL provider that allows developers to issue LINQ commands directly against SQL Server tables and columns. This will reduce the amount of time it takes to create new data queries.
When developing against databases, developers use higher-level objects that they map to individual database tables and columns. These objects, also known as entities, represent the data needed for database applications and, therefore, the developer doesn't need to understand the actual storage structure of the data and schema of the database. The new ADO.NET Entity Framework now allows developers to create database queries using these entities. The abstracting of the underlying database structure allows developers to be more productive.
SQL Server 2008 offers many different enhancements to T-SQL that allow database developers to be more efficient. One example is the new MERGE statement, which allows the developer to check for the existence of data before trying to insert the data. This check prior to performing the INSERT statement allows the data to be updated. No longer is it necessary to create complex joins in order to update data that exists and to insert data that does not already exist, all during a single statement.
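A minimal MERGE sketch; the table and column names (dbo.TargetTable, dbo.SourceTable, id, name) are hypothetical:

--Update matching rows and insert missing ones in a single statement
MERGE INTO dbo.TargetTable AS t
USING dbo.SourceTable AS s
    ON t.id = s.id
WHEN MATCHED THEN
    UPDATE SET t.name = s.name
WHEN NOT MATCHED THEN
    INSERT (id, name) VALUES (s.id, s.name);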
In addition, separating time and date data from the combined date/time data type has been made easier. SQL Server 2008 introduces two separate data types to handle date and time data. Different data types will translate to improved performance for many queries since there will no longer be a need to perform an operation on the data before it can be used in the query.
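For example, a small sketch of the new types (the literal values are illustrative):

DECLARE @d date = '2014-03-02';
DECLARE @t time(7) = '13:45:30.1234567';
SELECT @d AS DateOnly, @t AS TimeOnly;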
When creating newer database structures, database developers often find themselves stretching the structure of databases in order to implement mapping applications. SQL Server 2008 helps to address this issue with new spatial data types. The two spatial data types, GEOGRAPHY and GEOMETRY, allow developers to store location-specific data directly in the database without having to break those data elements down into formats that fit other standard data types. The code in Figure 5 is an example of a simple spatial table.
IF OBJECT_ID ( 'Demo_SpatialTable', 'Table' ) IS NOT NULL 
    DROP TABLE Demo_SpatialTable
GO

--Create table to hold spatial data
CREATE TABLE Demo_SpatialTable 
    ( SpatialID int IDENTITY (1,1),
    SpatialInputCol geography, 
    SpatialOutputCol AS SpatialInputCol.STAsText() )
GO

--Insert data into table
INSERT INTO Demo_SpatialTable (SpatialInputCol)
VALUES (geography::STGeomFromText('LINESTRING(47.656 -122.360, 47.656 -122.343)', 4326));

INSERT INTO Demo_SpatialTable (SpatialInputCol)
VALUES (geography::STGeomFromText('POLYGON((47.653 -122.358, 47.649 -122.348, 47.658 -122.348, 47.658 -122.358, 47.653 -122.358))', 4326));
GO

--View data to see that the data has been converted and stored in SpatialOutputCol
SELECT * FROM Demo_SpatialTable

A very common issue for database developers in the past was how to store and utilize large binary objects such as documents and media files. The method typically used was to store the files outside of the database and just store a pointer in the database to the external file. With this method, however, when you move the file, you must also remember to update the pointer.
SQL Server 2008 handles this issue with the new FILESTREAM data type. With this data type, files can still be stored outside of the database, but the data is considered part of the database for transactional consistency. This allows for the use of common file operations while still maintaining the performance and security benefits of the database.

What about Business Intelligence?
Increased use of SQL Server over the last few years has been driven in large part by the adoption of business intelligence strategies. Business intelligence capabilities are not new to SQL Server, but SQL Server 2008 does bring some new features to the table.
For example, when data is stored in data warehouses, space is often wasted due to NULL values. Columns that store NULL values take up the space of the largest allowed data size defined in the column. This means that a column with thousands of NULL values can actually consume many MB of space without actually storing any data.
SQL Server 2008 introduces sparse columns, which allows NULL values to be stored without taking up any physical space on the disk. Because sparse columns do not consume actual space, tables that contain sparse columns can actually exceed the 1,024 column limit.
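A sketch of declaring a sparse column (the table and columns are hypothetical; sparse columns must be nullable):

CREATE TABLE dbo.SurveyAnswers
    ( AnswerID int IDENTITY(1,1),
      Question varchar(100),
      Comment varchar(500) SPARSE NULL ) --NULL values here take no space
GO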
SQL Server 2008 also introduces a new mechanism, Change Data Capture, for managing incremental changes that need to be loaded into the data warehouse. This captures and places changed data into a set of change tables. Capturing updated, deleted, and inserted data in an easily consumed storage schema allows for the incremental loading of data warehouses from those tables—as opposed to having to build custom insert statements that try to figure out the changes made to existing rows of data before updating the data warehouse.
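Enabling Change Data Capture is a two-step process, first for the database and then per table; a sketch assuming a hypothetical dbo.Orders table:

--Enable CDC for the current database
EXEC sys.sp_cdc_enable_db
GO
--Enable CDC for one table (NULL role means access is not gated by a role)
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name = N'Orders',
    @role_name = NULL
GO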

Wrapping Up
This is just a quick overview of what SQL Server 2008 has in store. It will bring a broad set of new features and updates to existing features that will improve life for both database administrators and database developers. Ultimately, it will offer much improved performance and scalability for today's ever-demanding databases. For more information, visit SQL Server 2008 Webcasts, Virtual Labs and Podcasts for hands-on SQL Server 2008 resources.