
Ashok Blog for SQL Learners and Beginners and Experts

Friday, 17 February 2012

Interview questions in SQL Server 2005/2008/2012 and Asp.net and OOPS






DIFFERENCE BETWEEN  FUNCTION AND STORED PROCEDURE


Sl. No. — User Defined Function vs Stored Procedure

1. A function must return a value. A stored procedure may or may not return values.
2. A function allows only SELECT statements; it does not allow DML statements. A stored procedure can have SELECT statements as well as DML statements such as INSERT, UPDATE, and DELETE.
3. A function allows only input parameters; it does not support output parameters. A stored procedure can have both input and output parameters.
4. A function does not allow try-catch blocks. In a stored procedure, try-catch blocks can be used for exception handling.
5. Transactions are not allowed within functions. Transactions can be used within stored procedures.
6. A function can use only table variables; it does not allow temporary tables. A stored procedure can use both table variables and temporary tables.
7. Stored procedures cannot be called from a function. Stored procedures can call functions.
8. A function can be called from a SELECT statement. A procedure cannot be called from SELECT/WHERE/HAVING statements; the EXECUTE/EXEC statement is used to call a stored procedure.
9. A UDF can be used in a JOIN clause as a result set. A procedure cannot be used in a JOIN clause.



1. A function is mainly used where it must return a value, whereas a procedure may or may not return a value, or may return more than one value using OUT parameters. A function returns one value only; a procedure can return multiple values through its output parameters.

2. A function can be called from SQL statements, whereas a procedure cannot; for example, a UDF can be executed inside a SELECT clause while a stored procedure cannot.

3. Functions are normally used for computations, whereas procedures are normally used for executing business logic.

4. A stored procedure always returns an integer value (zero by default), whereas a function's return type can be a scalar value or a table.

5. A stored procedure has a precompiled execution plan, whereas functions do not.

6. A UDF cannot be used in a FOR XML clause, but a stored procedure can be.

7. If there is an error in a UDF, it stops executing. In a stored procedure, execution may ignore the error and move to the next statement (depending on the error's severity and the XACT_ABORT setting).

8. A UDF cannot make permanent changes to the server environment, while a stored procedure can change some of the server environment.

Types of UDFs (User Defined Functions) in SQL Server:

There are three types of UDFs in SQL Server:
1. Scalar
2. Inline Table-Valued
3. Multi-statement Table-Valued
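The three UDF types above can be sketched as follows. This is a minimal illustration; the dbo.Employees table and the dbo.fn_* function names are hypothetical.

```sql
-- 1. Scalar: returns a single value.
CREATE FUNCTION dbo.fn_AnnualSalary (@monthly MONEY)
RETURNS MONEY
AS
BEGIN
    RETURN @monthly * 12;
END;
GO

-- 2. Inline table-valued: a single SELECT, usable like a parameterized view.
CREATE FUNCTION dbo.fn_HighEarners (@minSalary MONEY)
RETURNS TABLE
AS
RETURN (SELECT Id, Name FROM dbo.Employees WHERE Salary >= @minSalary);
GO

-- 3. Multi-statement table-valued: fills and returns a declared table variable.
CREATE FUNCTION dbo.fn_SalaryBands ()
RETURNS @bands TABLE (Id INT, Band VARCHAR(10))
AS
BEGIN
    INSERT INTO @bands
    SELECT Id, CASE WHEN Salary >= 50000 THEN 'High' ELSE 'Low' END
    FROM dbo.Employees;
    RETURN;
END;
GO
```

Note that the inline TVF can be used directly in a JOIN, while a stored procedure cannot, which is point 9 in the table above.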

DIFFERENCE BETWEEN DELETE, TRUNCATE, DROP
Delete: DML statement; can be rolled back and can specify a WHERE clause. Truncate: DDL statement; cannot specify a WHERE clause; retains the table structure but deletes all rows (note that in SQL Server, TRUNCATE can still be rolled back if run inside an explicit transaction, because the page deallocations are logged). Drop: DDL statement; cannot specify a WHERE clause; deletes the table structure along with all rows.

Define Truncate and Delete commands.

TRUNCATE
- A logged operation, but only in terms of deallocation of data pages (so in SQL Server it can still be rolled back inside an explicit transaction).
- Cannot TRUNCATE a table that is referenced by foreign key constraints.
- Resets the identity column to the default starting value.
- Removes all rows from a table.
- A DDL command.

DELETE
- A logged operation for every row.
- Any row not violating a constraint can be deleted.
- Does not reset the identity column; it starts where it left off.
- Deletes all rows or selected rows from a table based on a WHERE clause.
- Needs a COMMIT or ROLLBACK.
- A DML command.
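As a quick illustration of the WHERE-clause and identity-reset differences (the dbo.Demo table is hypothetical):

```sql
CREATE TABLE dbo.Demo (Id INT IDENTITY(1,1), Name VARCHAR(20));
INSERT INTO dbo.Demo (Name) VALUES ('A'), ('B'), ('C');

DELETE FROM dbo.Demo WHERE Name = 'B';    -- row-by-row, WHERE allowed
INSERT INTO dbo.Demo (Name) VALUES ('D'); -- identity continues: Id = 4

TRUNCATE TABLE dbo.Demo;                  -- all rows removed, identity reset
INSERT INTO dbo.Demo (Name) VALUES ('E'); -- identity starts again at 1
```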


Difference between primary key and unique key
The first difference is that a primary key doesn't accept NULL values, whereas a unique key accepts NULLs (in SQL Server, at most one NULL per unique constraint). The second difference is that a clustered index is created on a primary key constraint, while a non-clustered unique index is created on a unique key constraint.

 Differences between Primary Key and Unique Key:
Primary Key
1. A primary key cannot allow null values. (You cannot define a primary key on columns that allow nulls.)
2. Each table can have at most one primary key.
3. On some RDBMSs (including SQL Server), a primary key automatically generates a clustered table index by default.

Unique Key
1. A unique key can allow null values. (You can define a unique key on columns that allow nulls.)
2. Each table can have multiple unique keys.
3. On some RDBMSs (including SQL Server), a unique key automatically generates a non-clustered table index by default.
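A minimal sketch of both constraints on a hypothetical table, including the single-NULL behaviour of a SQL Server unique index:

```sql
CREATE TABLE dbo.Customers (
    CustomerId INT NOT NULL PRIMARY KEY,  -- clustered index by default; no NULLs
    Email      VARCHAR(100) NULL UNIQUE   -- non-clustered unique index; at most one NULL
);

INSERT INTO dbo.Customers VALUES (1, 'a@example.com');
INSERT INTO dbo.Customers VALUES (2, NULL);    -- allowed once
-- INSERT INTO dbo.Customers VALUES (3, NULL); -- would fail: duplicate NULL in the unique index
```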


What is the difference between SET and SELECT when assigning variables?
  1. SET is the ANSI standard for variable assignment, SELECT is not.
  2. SET can only assign one variable at a time, SELECT can make multiple assignments at once.
  3. If assigning from a query, SET can only assign a scalar value. If the query returns multiple values/rows then SET will raise an error. SELECT will assign one of the values to the variable and hide the fact that multiple values were returned (so you'd likely never know why something was going wrong elsewhere - have fun troubleshooting that one)
  4. When assigning from a query, if there is no value returned then SET will assign NULL, whereas SELECT will not make the assignment at all (so the variable will not be changed from its previous value)
  5. As far as speed differences - there are no direct differences between SET and SELECT. However SELECT's ability to make multiple assignments in one shot does give it a slight speed advantage over SET.
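The assignment differences above can be sketched as follows (a minimal illustration using throwaway variables):

```sql
DECLARE @a INT = 99, @b INT, @c INT;

SET @a = (SELECT 1 WHERE 1 = 0);  -- query returns no row: SET assigns NULL
SELECT @b = 1, @c = 2;            -- SELECT can assign several variables at once

DECLARE @d INT = 99;
SELECT @d = x
FROM (VALUES (1)) AS t(x)
WHERE x = 0;                      -- no row returned: @d keeps its previous value, 99
```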

What is the difference between a Local and a Global temporary table?

Temporary tables are used to allow short-term use of data in SQL Server. They are of 2 types:

Local
- Only available to the current connection for the current user; cleared when the connection is closed.
- Multiple users can't share a local temporary table.

Global
- Available to any connection once created; cleared when the last connection using it is closed.
- Can be shared by multiple user sessions.
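A short sketch of the two kinds of temporary table (table and column names are hypothetical):

```sql
CREATE TABLE #LocalTemp (Id INT);   -- visible only to this session
CREATE TABLE ##GlobalTemp (Id INT); -- visible to all sessions until the last one using it disconnects

INSERT INTO #LocalTemp VALUES (1);
INSERT INTO ##GlobalTemp VALUES (1);

-- Another connection could SELECT FROM ##GlobalTemp,
-- but a SELECT FROM #LocalTemp there would fail: the name resolves per session.
```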


What is the difference between a UNION and UNION ALL?

UNION selects only distinct values; UNION ALL selects all values, not just distinct ones. Because UNION must remove duplicates, UNION ALL is generally faster.
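A minimal illustration using inline VALUES row constructors:

```sql
SELECT x FROM (VALUES (1), (2)) AS a(x)
UNION
SELECT x FROM (VALUES (2), (3)) AS b(x);   -- returns 1, 2, 3 (duplicates removed)

SELECT x FROM (VALUES (1), (2)) AS a(x)
UNION ALL
SELECT x FROM (VALUES (2), (3)) AS b(x);   -- returns 1, 2, 2, 3 (all rows kept)
```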

List out the difference between CUBE operator and ROLLUP operator

CUBE
- An additional switch to the GROUP BY clause. It can be applied with aggregate functions to return cross-tabular result sets.
- Produces subtotals for all possible combinations of the columns specified in the GROUP BY clause, plus a grand total.

ROLLUP
- An extension to the GROUP BY clause, used to extract statistical and summarized information from result sets. It creates groupings and then applies aggregate functions on them.
- Produces only a hierarchical subset of the possible subtotal combinations, rolling up from right to left.
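A minimal illustration of both operators over an inline data set (the Sales derived table is hypothetical):

```sql
-- CUBE: subtotals for (Region, Product), (Region), (Product), plus the grand total.
SELECT Region, Product, SUM(Amount) AS Total
FROM (VALUES ('N', 'A', 10), ('N', 'B', 20), ('S', 'A', 30)) AS Sales(Region, Product, Amount)
GROUP BY CUBE (Region, Product);

-- ROLLUP: only the hierarchy (Region, Product), (Region), plus the grand total;
-- no (Product)-only subtotals.
SELECT Region, Product, SUM(Amount) AS Total
FROM (VALUES ('N', 'A', 10), ('N', 'B', 20), ('S', 'A', 30)) AS Sales(Region, Product, Amount)
GROUP BY ROLLUP (Region, Product);
```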

What are the differences among batches, stored procedures, and triggers?

Batch
A collection or group of SQL statements. All statements of a batch are compiled into one execution unit called an execution plan, and are then executed statement by statement.

Stored Procedure
A collection or group of SQL statements that is compiled once but used many times.

Triggers
A type of stored procedure that cannot be called directly. Instead, it fires when a row is updated, deleted, or inserted.

State the functions briefly:
1) @@ERROR (returns the error number returned by the last executed statement)
2) @@ROWCOUNT (returns the number of rows affected by the last query)
 
1)@@ERROR:
The @@ERROR function returns the number of the last error encountered on the current connection. If there are no errors, @@ERROR returns 0. With SQL Server 2000 and previous releases, checking @@ERROR was the only way to diagnose and troubleshoot errors. SQL Server 2005 introduces TRY/CATCH syntax for error handling as well as several new functions.

Note that @@ERROR returns the error number returned by the last executed statement, so it's important to catch the error immediately after it occurs. For example, the following query catches the error because it checks for the error immediately after the statement that encountered the error:

SELECT 1 / 0
SELECT 'error number is: ' + CAST(@@ERROR AS VARCHAR)

Results:

Msg 8134, Level 16, State 1, Line 1
Divide by zero error encountered.

error number is: 8134

However, the next example does NOT catch the error because it checks the @@ERROR function value too late, after a statement that completes successfully:

SELECT 1 / 0
SELECT 'this is a successful statement. it resets @@ERROR to zero!'
SELECT 'error number is: ' + CAST(@@ERROR AS VARCHAR)

Results:

Msg 8134, Level 16, State 1, Line 1
Divide by zero error encountered.

this is a successful statement. it resets @@ERROR to zero!

error number is: 0

2)@@ROWCOUNT:
The @@ROWCOUNT function returns the number of rows affected by the last query. This function can be used effectively to find out whether the number of rows modified is the same as what you intended to modify. If your query was supposed to update 15 rows but @@ROWCOUNT returns 10 then something must have gone wrong. Much like with @@ERROR it's important to get the @@ROWCOUNT value immediately after the statement you want to examine. For example, the following query erroneously reports that the total number of affected rows is one, even though your main query returned 10 rows, as desired:

SELECT TOP 10 * FROM dimCustomer
SELECT 'this is a successful statement. it resets @@ROWCOUNT to one!'
SELECT 'number of rows affected is: ' + CAST(@@ROWCOUNT AS VARCHAR)

What is the difference between Web.config and Machine.config?
"Web.config" files apply settings to each web application, while the "Machine.config" file applies settings to all ASP.NET applications on the machine.

What is B-Tree?

A B-tree is a method of placing and locating files (called records or keys) in a database. (The meaning of the letter B has not been explicitly defined.) The B-tree algorithm minimizes the number of times a medium must be accessed to locate a desired record, thereby speeding up the process.
B-trees are preferred when decision points, called nodes, are on hard disk rather than in random-access memory (RAM). It takes thousands of times longer to access a data element from hard disk than from RAM, because a disk drive has mechanical parts, which read and write data far more slowly than purely electronic media. B-trees save time by using nodes with many branches (called children), compared with binary trees, in which each node has only two children. When there are many children per node, a record can be found by passing through fewer nodes than when each node has only two children.

What is use of DBCC Commands?
DBCC (Database Consistency Checker) commands are database console commands for SQL Server, used to check database consistency and perform maintenance. They are grouped as:
Maintenance: Maintenance tasks on Db, filegroup, index etc. Commands include DBCC CLEANTABLE, DBCC INDEXDEFRAG, DBCC DBREINDEX, DBCC SHRINKDATABASE, DBCC DROPCLEANBUFFERS, DBCC SHRINKFILE, DBCC FREEPROCCACHE, and DBCC UPDATEUSAGE.
Miscellaneous: Tasks such as enabling tracing, removing dll from memory. Commands include DBCC dllname, DBCC HELP, DBCC FREESESSIONCACHE, DBCC TRACEOFF, DBCC FREESYSTEMCACHE, and DBCC TRACEON.
Informational: Tasks which gather and display various types of information. Commands include DBCC INPUTBUFFER, DBCC SHOWCONTIG, DBCC OPENTRAN, DBCC SQLPERF, DBCC OUTPUTBUFFER, DBCC TRACESTATUS, DBCC PROCCACHE, DBCC USEROPTIONS, and DBCC SHOW_STATISTICS.
Validation: Operations for validating on Db, index, table etc. Commands include DBCC CHECKALLOC, DBCC CHECKFILEGROUP, DBCC CHECKCATALOG, DBCC CHECKIDENT, DBCC CHECKCONSTRAINTS, DBCC CHECKTABLE, and DBCC CHECKDB.

What are the restrictions applicable while creating views?

Restrictions applicable while creating views:
·         A view cannot be indexed unless it is created as an indexed view (which requires SCHEMABINDING).
·         A view's columns cannot be renamed in place.
·         To change a view's definition, use ALTER VIEW, or drop and re-create the view.
·         The ANSI_NULLS and QUOTED_IDENTIFIER options should be turned ON to create a view.
·         All tables referenced in a view must be part of the same database.
·         Any user-defined functions referenced in an indexed view must be created with the SCHEMABINDING option.
·         An indexed view cannot use ROWSET functions, UNION, TOP, ORDER BY, DISTINCT, COUNT(*), COMPUTE, or COMPUTE BY.

What are the capabilities of Cursors?

Capabilities of cursors:
·         Cursor reads every row one by one.
·         Cursors can be used to update a set of rows or a single specific row in a result set.
·         Cursors can be positioned to specific rows.
·         Cursors can be parameterized and hence are flexible.
·         Cursors lock row(s) while updating them.
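The row-by-row behaviour above can be sketched with a minimal cursor loop (the dbo.Employees table is hypothetical):

```sql
DECLARE @name VARCHAR(100);

DECLARE emp_cursor CURSOR FOR
    SELECT Name FROM dbo.Employees;

OPEN emp_cursor;
FETCH NEXT FROM emp_cursor INTO @name;

WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @name;  -- process one row at a time
    FETCH NEXT FROM emp_cursor INTO @name;
END

CLOSE emp_cursor;
DEALLOCATE emp_cursor;
```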
What are the advantages of using Stored Procedures?
  •  They are easier to maintain and troubleshoot because they are modular.
  •  Stored procedures enable better tuning for performance.
  •  Using stored procedures is much easier from a GUI end than building/using complex queries.
  •  They can be part of a separate layer, which allows separating concerns; the database layer can then be handled by separate developers proficient in database queries.
  •  They help in reducing network usage.
  •  They provide more scalability to an application.
  •  They are reusable and hence reduce code.

     SQL Server 2008 New Features
1. Policy-Based Management
2. Self-Tuning

http://sqltechi.blogspot.in/2015/04/new-features-in-sql-server-200820122014.html

New features and enhancements in SQL Server 2012

http://sqltechi.blogspot.in/2015/04/new-features-in-sql-server-200820122014.html
  •  Hardware and Software Requirements 
  •  Multi-Subnet Failover Clustering 
  •  Programming Enhancements, including sequences, ad-hoc query paging and full-text search tweaks 
  •  BI and Web Development Environment Improvements 
  •  Web-based Visualization
  • Data Quality Services
SQL Injection
  •  SQL injection is a technique used to take advantage of non-validated input vulnerabilities to pass SQL commands through a Web application for execution by a backend database. Attackers take advantage of the fact that programmers often chain together SQL commands with user-provided parameters, and can therefore embed SQL commands inside these parameters. The result is that the attacker can execute arbitrary SQL queries and/or commands on the backend database server through the Web application
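A sketch of the vulnerability and its standard T-SQL mitigation, parameterization with sp_executesql (the dbo.Users table is hypothetical):

```sql
DECLARE @userInput NVARCHAR(50) = N'x'' OR ''1''=''1';

-- Vulnerable: user input concatenated into the statement text.
DECLARE @sql NVARCHAR(200) =
    N'SELECT * FROM dbo.Users WHERE Name = ''' + @userInput + N'''';
-- EXEC (@sql);  -- the injected OR ''1''=''1'' would return every row

-- Safe: the value is passed as a parameter and is never parsed as SQL.
EXEC sp_executesql
    N'SELECT * FROM dbo.Users WHERE Name = @name',
    N'@name NVARCHAR(50)',
    @name = @userInput;
```

The same principle applies on the application side: use parameterized commands (e.g. SqlParameter in ADO.NET) instead of string concatenation.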

What is Log Shipping?
Definition:1
Log shipping defines the process for automatically taking backup of the database and transaction files on a SQL Server and then restoring them on a standby/backup server. This keeps the two SQL Server instances in sync with each other. In case production server fails, users simply need to be pointed to the standby/backup server. Log shipping primarily consists of 3 operations:
Backup transaction logs of the Production server.
Copy these logs on the standby/backup server.
Restore the log on standby/backup server.
Definition:2
Log Shipping is used to synchronize distributed database servers. It synchronizes the databases by backing up, copying, and restoring transaction logs, and SQL Server uses SQL Server Agent jobs to make those processes automatic. Log shipping does not involve automatic failover: it keeps the databases synchronized, but if the primary server fails, it will not redirect your application to a secondary server. This has to be done manually. The main functions of Log Shipping are as follows: backing up the transaction log of the primary database; copying the transaction log backup to each secondary server; restoring the transaction log backup on the secondary database.

Log Shipping is a basic level SQL Server high-availability technology that is part of SQL Server. It is an automated backup/restore process that allows you to create another copy of your database for failover.
Log shipping involves copying a database backup and subsequent transaction log backups from the primary (source) server and restoring the database and transaction log backups on one or more secondary (Stand By / Destination) servers. The Target Database is in a standby or no-recovery mode on the secondary server(s) which allows subsequent transaction logs to be backed up on the primary and shipped (or copied) to the secondary servers and then applied (restored) there

Replication:
Definition 1:
Replication is a set of technologies for copying and distributing data and database objects from one database to another and then synchronizing between databases to maintain consistency. Using replication, you can distribute data to different locations and to remote or mobile users over local and wide area networks, dial-up connections, wireless connections, and the Internet
Definition 2:
The process of creating and managing duplicate versions of a database. Replication not only copies a database but also synchronizes a set of replicas so that changes made to one replica are reflected in all the others. The beauty of replication is that it enables many users to work with their own local copy of a database but have the database updated as if they were working on a single, centralized database. For database applications where users are geographically widely distributed, replication is often the most efficient method of database access.
Types of Replication
  •   Snapshot   Replication
  •   Transactional  Replication
  •    Merge Replication
The type of replication you choose for an application depends on many factors, including the physical replication environment, the type and quantity of data to be replicated, and whether the data is updated at the Subscriber.
  •  The physical environment includes the number and location of computers involved in replication and whether these computers are clients or servers. 
  • Each type of replication typically begins with an initial synchronization of the published objects between the Publisher and Subscribers. 
  • Publisher(sender(data))<------------------------------------------>  Subscribers (Receiver(data))
  • This initial synchronization can be performed by replication with a snapshot, which is a copy of all of the objects and data specified by a publication. After the snapshot is created, it is delivered to the Subscribers
  • For some applications, snapshot replication is all that is required. For other types of applications, it is important that subsequent data changes flow to the Subscriber incrementally over time. Some applications also require that changes flow from the Subscriber back to the Publisher. 
  • Transactional replication and merge replication provide options for these types of applications. Data changes are not tracked for snapshot replication. Each time a snapshot is applied, it completely overwrites the existing data.
  • Transactional replication tracks changes through the SQL Server transaction log
  • Merge replication tracks changes through triggers and metadata tables.
  •  Snapshot replication acts in the manner its name implies. The publisher simply takes a snapshot of the entire replicated database and shares it with the subscribers. Of course, this is a very time and resource-intensive process. For this reason, most administrators don’t use snapshot replication on a recurring basis for databases that change frequently. 
There are two scenarios where snapshot replication is commonly used.
First, it is used for databases that rarely change.
Second, it is used to set a baseline to establish replication between systems while future updates are propagated using transactional or merge replication. 
  • Transactional replication offers a more flexible solution for databases that change on a regular basis. With transactional replication, the replication agent monitors the publisher for changes to the database and transmits those changes to the subscribers. This transmission can take place immediately or on a periodic basis. 
  • Merge replication allows the publisher and subscriber to independently make changes to the database. Both entities can work without an active network connection. When they are reconnected, the merge replication agent checks for changes on both sets of data and modifies each database accordingly. If changes conflict with each other, it uses a predefined conflict resolution algorithm to determine the appropriate data. Merge replication is commonly used by laptop users and others who can not be constantly connected to the publisher.

What is Peer to Peer Replication?


With Peer to Peer replication the idea is that any subscriber is also a publisher, so data can move both ways (bi-directional replication).  With Peer to Peer replication there is a mechanism that knows when a transaction has been replicated, so the transaction does not continue to be replicated over and over again.  The one area that still needs to be handled is when the same data is modified on multiple servers, in which case some type of conflict resolution may need to take place.
Below is a diagram that shows how Peer to Peer replication works.  This diagram shows two different configurations: on the left side both nodes A and B are used for reading and writing, and on the right side only node B is used for reading and writing while node A is used only for reading data.  The application server, which could be a web server, determines which node is used for which component.
Peer-to-peer replication, two nodes
(source SQL Server 2005 Books Online)
This model could be further extended to have nodes C, D, etc., and the application server could determine which server is utilized for reading and/or writing.  In most cases database servers are more read-intensive than write-intensive, so having more read-only nodes would allow you to get additional I/O throughput on your database servers. If you also needed to have more nodes handling writes, you could build this into your application servers to determine which writes are done to which servers.
With Peer to Peer replication when data updates take place the data is then replicated to all other Peers, so the offloading of read activities could be directed to any one of the nodes.  From a failover solution, since all nodes act as both publishers and subscribers you can easily point your application server to any of the nodes to act as your primary database.

Database mirroring
Database mirroring can be used to improve availability of certain replication databases. Support for combining transactional replication with database mirroring depends on which replication database is being considered. Peer-to-Peer replication is not supported in combination with database mirroring.


Database Mirroring

It will allow failover of database in the event you lose your main SQL Server.



Before starting this, be sure that you have 3 SQL Servers in different locations for best results. Two servers need to have identical SQL Server instances, which means the same version (either Standard or Enterprise); it is also highly recommended that the service pack and any cumulative updates are the same on both servers. The third server can be SQL Server Standard, Enterprise, Workgroup, or Express. The Witness server is the one pinging the other two servers to check whether something is wrong; it is the server that has the ability to recognize whether to initiate an automatic failover. It will not contain any database, so there is no reason to use an edition other than SQL Server Express for it.
  1. Verify the following:
    1. You have 3 SQL Servers for Principal, Mirror and Witness
    2. SQL Server is using an Active Directory account. Ideally, use the same account for all SQL Servers.
    3. Primary Database is in Full Recovery model.
  2. Back up the database on the Principal SQL Server.
  3. Create a database with the same name from the Principal SQL Server on the Mirroring SQL Server, then restore the backup on the Mirroring SQL Server with the option to Overwrite the existing database checked and RESTORE WITH NORECOVERY option.
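Steps 2 and 3 above can be sketched in T-SQL as follows (the database name and backup paths are hypothetical):

```sql
-- On the Principal:
BACKUP DATABASE MyDb TO DISK = N'C:\Backups\MyDb.bak';
BACKUP LOG MyDb TO DISK = N'C:\Backups\MyDb.trn';

-- On the Mirror (after copying the files across):
RESTORE DATABASE MyDb FROM DISK = N'C:\Backups\MyDb.bak'
    WITH REPLACE, NORECOVERY;   -- overwrite the existing database, leave it restoring
RESTORE LOG MyDb FROM DISK = N'C:\Backups\MyDb.trn'
    WITH NORECOVERY;            -- database stays in RESTORING state, ready for mirroring
```

The WITH NORECOVERY option is what keeps the mirror database in a restoring state so that mirroring can be established on top of it.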

 SQL Server 2005 Backup Types:
1) Full
2) Transaction Log
3) Differential
4) Partial
5) Differential Partial
6) File and Filegroup
7) Copy-Only Database Backups

 Full Database Backup A full database backup captures the entire database in one fell swoop. It can be used to restore a database to the time when the backup completed. A full database backup is often used during restore operations as a first step in recovering a database. Other backups are applied after the full backup is restored to bring the database closer in time to the actual moment of failure.
Transaction Log Backup Only available under the full recovery model and the bulk logged recovery model. Transaction log backups allow for point in time recovery since each transaction is captured within the transaction log.
 Differential Database Backup A differential database backup captures changes (and only the changes) made to the database since the last full database backup. This full database backup which differential backups reference is also called the differential base, since the differential backups are recording changes since the last full database backup was performed. The differential database backup records the changes to the database up to the time when the differential backup was completed.
Partial Database Backup Partial backups were introduced for SQL Server 2005. Partial backups will backup the primary filegroup, any read/write file groups, and any read-only files that are specified during the backup. The partial backup is designed to not backup any data that is placed in a read-only file group. So, you can use a partial backup to backup your entire database – except for the read only data. Partial backups are meant to be used in a simple recovery model situation, although they can be used in a full recovery model situation as well.
 Differential Partial Database Backup Differential partial backups use the most recent partial backup as their differential base. They work very similar to a differential backup, except (with some exceptions) they capture only changes made to read/write filegroups . If you add, drop, or change the status of any of your filegroups, you’re best off taking a partial backup prior to resuming differential partial backups. Filegroup changes can have unintended consequences with differential partial backups
 File and Filegroup Backups This primarily applies to very large databases (VLDB) with multiple filegroups that have strict availability requirements. In these instances, sometimes it takes too long to perform a full backup, or the size of a full backup is extremely large. File and filegroup backups allow individual files or filegroups to be backed up and restored independently.
Copy Only Database Backup Copy Only backups in SQL Server 2005 must be performed using T-SQL and are not logged in the backup and restore sequence for the database. In other words, you can’t apply differential backups to this database backup, it is an independent copy separate from the original database. The transaction log is unaffected (will not be truncated) during this type of backup. You might consider making a copy only backup for a special purpose – like database development or testing.
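A copy-only backup can be taken with the COPY_ONLY option (database name and path are hypothetical):

```sql
-- Does not disturb the differential base or the log backup chain.
BACKUP DATABASE MyDb
TO DISK = N'C:\Backups\MyDb_copy.bak'
WITH COPY_ONLY;
```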

Types of Cursors
SQL Server 2005
Microsoft SQL Server 2005 Compact Edition (SQL Server Compact Edition) cursors are similar to cursors used in SQL Server. The differences are documented in this section. For a complete explanation about database cursors, see SQL Server Books Online. SQL Server Compact Edition supports the following types of cursors:
  •  Base table
  •  Static
  •  Forward-only
  •  Forward-only/Read-only(Dynamic)
  •  Keyset-driven

Base Table Cursors

Base table cursors are the lowest level of cursor available. These cursors work directly against the storage engine and are the fastest of all supported cursor types. Base table cursors can scroll forward or backward with minimal cost, and can be updated. You can also open a cursor directly on an index. Indexes are supported to order the rows in a table, to enable seeking on particular values, and to restrict the rows based on a range of values within an index.

Base table cursors have dynamic membership. This means that two cursors opened over the same table can immediately see insertions, deletions, and changes to the data, assuming both are in the same transaction scope. Because you can update base table cursors, a client can use this kind of cursor to make changes to the underlying data. Base table cursors cannot represent the result of a query. Results of queries, such as SELECT * FROM tablename, are not returned through a base table cursor. Instead, one of the supported query result cursors is used.

The following is an example of how to obtain a base table cursor by using ADO.NET:

//Base Table Cursor
cmd.CommandText = "tablename";
cmd.CommandType = CommandType.TableDirect;
SqlCeResultSet rs = cmd.ExecuteResultSet(ResultSetOptions.Scrollable | ResultSetOptions.Updatable);

The following is an example of how to obtain an index cursor by using ADO.NET:

cmd.CommandText = "tablename";
cmd.IndexName = "indexname";
cmd.CommandType = CommandType.TableDirect;
SqlCeResultSet rs = cmd.ExecuteResultSet(ResultSetOptions.Scrollable | ResultSetOptions.Updatable);
Static Cursors
A static cursor, referred to as a scrollable query cursor in earlier versions of SQL Server Compact Edition, creates and stores a complete copy of the result set. The exception to this is long-value data, which is retrieved when a user explicitly requests it. This result set is filled only as needed. This is different from SQL Server, which populates the result set at cursor creation. Static cursors support scrolling backward and forward, but they do not support updates. Static cursors are insensitive: they do not see external changes to the data. Query results are cached for the lifetime of the cursor. Although static cursors are more functional than forward-only cursors, static cursors are slower and use more memory. We recommend that you consider static cursors only if scrolling is required and a keyset cursor is not appropriate.

The following is an example of how to obtain a static cursor by using ADO.NET:

cmd.CommandText = "Select * from tablename";
SqlCeResultSet rs = cmd.ExecuteResultSet(ResultSetOptions.Scrollable | ResultSetOptions.Insensitive);

Forward-only Cursors
The forward-only cursor is the fastest cursor that you can update, but it does not support scrolling. It supports only fetching the rows serially from the start to the end of the cursor. The rows are not retrieved from the database until they are fetched. The effects of all INSERT, UPDATE, and DELETE statements made by the current user or committed by other users that affect rows in the result set are visible as the rows are fetched from the cursor. Because the cursor cannot be scrolled backward, changes made to rows in the database after the row was fetched are not visible by using the cursor. Forward-only and forward-only/read-only cursors are the fastest query-based cursors. They should be used in scenarios in which speed and memory footprint are most important.

The following is an example of how to obtain a forward-only cursor by using ADO.NET:

cmd.CommandText = "Select * from tablename";
SqlCeResultSet rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable);
 Forward-only/Read-only Cursors

Forward-only/read-only cursors, referred to as forward-only cursors in earlier versions of SQL Server Compact Edition, are the fastest cursors, but cannot be updated.

The following is an example of how to obtain a forward-only/read-only cursor by using ADO.NET:

cmd.CommandText = "Select * from tablename";
SqlCeResultSet rs = cmd.ExecuteResultSet(ResultSetOptions.None);

Note: You cannot create read-only cursors on a query that returns only read-only columns, because internally all SQL Server Compact Edition cursors are updatable. SQL Server Compact Edition cannot update read-only columns returned in the SqlCeResultSet, so the call would fail with the error "Cannot generate an updatable cursor for the query because there is no updatable column."
 Keyset-driven Cursors
The keyset-driven cursor in SQL Server Compact Edition is a scrollable cursor that you can update. A keyset-driven cursor is controlled by a set of physical identifiers known as the keyset. The keyset is based on all the rows that qualified for the SELECT statement at the time the cursor was opened. The keyset is built in a temporary table when the cursor is opened. With a keyset-driven cursor, membership is determined at the time that the query is executed. Keyset-driven cursors in SQL Server Compact Edition differ slightly from those in SQL Server. In SQL Server, the keyset-driven cursor uses a set of unique identifiers as the keys in the keyset. In SQL Server Compact Edition, the keys are bookmarks for where values are logically stored in a table. They are not unique identifiers. Although sensitive to a number of changes, a keyset-driven cursor is not as sensitive as other cursors. For example, an insert outside the cursor will not be seen, although inserts inside the cursor will be seen at the end. In this case, we recommend that you close and reopen the cursor, or use one of the forward-only cursors. Because SQL Server Compact Edition uses bookmarks to define a keyset, all changes to data values for rows that are included in the keyset are visible by using the cursor. This is the case both for changes that are made within the cursor and for changes that are made outside the cursor. Any delete in a keyset cursor, whether within or outside the cursor, will cause the row to be reported as deleted if an attempt is made to fetch it. The following is an example of how to obtain a keyset-driven cursor by using ADO.NET:

```csharp
cmd.CommandText = "Select * from tablename";
SqlCeResultSet rs = cmd.ExecuteResultSet(ResultSetOptions.Scrollable | ResultSetOptions.Updatable);
```

Triggers in SQL SERVER
A trigger is a special kind of stored procedure that is invoked whenever an attempt is made to modify the data in the table it protects. Modifications to the table are made using INSERT, UPDATE, or DELETE statements. Triggers are used to enforce data integrity and business rules, such as automatically updating summary data. They allow you to perform cascading delete or update operations. If constraints exist on the trigger table, they are checked prior to the trigger execution; if the constraints are violated, the statement will not be executed and the trigger will not run. Triggers are associated with tables and they are automatic: triggers are automatically invoked by SQL Server. Triggers prevent incorrect, unauthorized, or inconsistent changes to data.
Creation of Triggers
Triggers are created with the CREATE TRIGGER statement. This statement specifies the table on which the trigger is defined and the events for which the trigger will be invoked. To drop a trigger, use the DROP TRIGGER statement.

```sql
CREATE TRIGGER [owner.]trigger_name
ON [owner.]table_name
FOR [INSERT | UPDATE | DELETE]
AS
    IF UPDATE(column_name) [{AND | OR} UPDATE(column_name)...]
    { sql_statements }
```
Trigger rules and guidelines
A table can have triggers for only three actions: UPDATE, INSERT, and DELETE. Only table owners can create and drop triggers for the table; this permission cannot be transferred. A trigger cannot be created on a view or a temporary table, but triggers can reference them. A trigger should not include SELECT statements that return results to the user, because the returned results would have to be handled in every application in which modifications to the trigger table are allowed. Triggers can be used to help ensure the relational integrity of a database. On dropping a table, all triggers associated with it are automatically dropped. The system stored procedure sp_depends can be used to find out which tables have triggers on them. The following SQL statements are not allowed in a trigger: all CREATE statements, all DROP statements, ALTER TABLE and ALTER DATABASE, TRUNCATE TABLE, GRANT and REVOKE, UPDATE STATISTICS, RECONFIGURE, LOAD DATABASE and LOAD TRANSACTION, all DISK statements, and SELECT INTO.
INSERT trigger
When an INSERT statement is executed on a table with an INSERT trigger, new rows are added to the trigger table and to the inserted table at the same time. The inserted table is a logical table that holds a copy of the rows that have been inserted. The trigger can examine the inserted table to determine whether or how the trigger actions are carried out. The rows in the inserted table are always duplicates of one or more rows in the trigger table. With the inserted table, inserted data can be referenced without having to store the information in variables.
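As a minimal sketch of reading the inserted table, the following trigger copies new rows into an audit table (the Orders and OrderAudit tables and their columns are hypothetical, not from the original text):

```sql
-- Hypothetical example: audit every insert into Orders.
-- Assumes Orders(order_id, amount) and OrderAudit(order_id, amount, audited_at) exist.
CREATE TRIGGER trg_Orders_Insert
ON Orders
FOR INSERT
AS
BEGIN
    INSERT INTO OrderAudit (order_id, amount, audited_at)
    SELECT order_id, amount, GETDATE()
    FROM inserted;   -- "inserted" holds copies of the newly added rows
END
```

Because the trigger reads from the inserted table rather than from variables, it handles multi-row inserts correctly with no extra code.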
DELETE trigger
When a DELETE statement is executed on a table with a DELETE trigger, rows are deleted from the table and are placed in a special logical table called the deleted table.
UPDATE trigger
When an UPDATE statement is executed on a table that has an UPDATE trigger, the original rows are moved into the deleted table, while the updated rows are inserted into the inserted table as the table is being updated. The IF UPDATE(column_name) clause tests whether a particular column was modified:

```sql
CREATE TRIGGER trigger1
ON member
FOR UPDATE
AS
IF UPDATE(last_name)
BEGIN
    RAISERROR('Transaction cannot be processed', 16, 1)
    ROLLBACK TRANSACTION
END
```
Multi-row trigger
A multi-row insert can occur from an INSERT with a SELECT statement. Multi-row considerations also apply to multi-row updates and multi-row deletes.

```sql
CREATE TRIGGER adult_insert
ON adult
FOR INSERT
AS
DECLARE @rcnt int
SELECT @rcnt = @@ROWCOUNT
IF (SELECT COUNT(*) FROM member, inserted
    WHERE member.mem_num = inserted.mem_num) = 0
BEGIN
    PRINT 'Transaction not processed'
    PRINT 'No entry for this member'
    ROLLBACK TRANSACTION
END
IF (SELECT COUNT(*) FROM member, inserted
    WHERE member.mem_num = inserted.mem_num) <> @rcnt
BEGIN
    PRINT 'Not all adults have an entry in the member table'
    PRINT 'Multi-row insert transaction has been rolled back'
    ROLLBACK TRANSACTION
END
```
What is a Trigger
A trigger is a special kind of stored procedure that executes in response to certain actions on a table, such as the insertion, deletion, or update of data. It is a database object which is bound to a table and is executed automatically. You can't explicitly invoke triggers; the only way to fire one is by performing the required action on the table that it is assigned to.
  Types Of Triggers
There are three action query types that you use in SQL which are INSERT, UPDATE and DELETE. So, there are three types of triggers and hybrids that come from mixing and matching the events and timings that fire them. Basically,
triggers are classified into two main types:-
 (i) After Triggers (For Triggers) (ii) Instead Of Triggers

 (i) After Triggers
These triggers run after an insert, update or delete on a table. They are not supported for views. AFTER TRIGGERS can be classified further into three types as:
(a) AFTER INSERT Trigger. 
(b) AFTER UPDATE Trigger. 
(c) AFTER DELETE Trigger.
 (ii) Instead Of Triggers
These can be used as an interceptor for anything that anyone tries to do on our table or view. If you define an INSTEAD OF trigger on a table for the DELETE operation and someone tries to delete rows, the rows will not actually be deleted (unless you issue another delete instruction from within the trigger). INSTEAD OF TRIGGERS can be classified further into three types as:-
(a) INSTEAD OF INSERT Trigger. 
(b) INSTEAD OF UPDATE Trigger. 
(c) INSTEAD OF DELETE Trigger.
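The interception described above can be sketched with a minimal INSTEAD OF DELETE trigger that soft-deletes rows instead of removing them (the Customers table and its is_active column are hypothetical):

```sql
-- Hypothetical example: intercept DELETE and flag rows inactive instead.
-- Assumes Customers(customer_id, is_active) exists.
CREATE TRIGGER trg_Customers_InsteadOfDelete
ON Customers
INSTEAD OF DELETE
AS
BEGIN
    -- The rows the user tried to delete are available in the "deleted" table;
    -- mark them inactive rather than physically removing them.
    UPDATE c
    SET c.is_active = 0
    FROM Customers AS c
    JOIN deleted AS d ON d.customer_id = c.customer_id;
END
```

After this trigger exists, `DELETE FROM Customers WHERE customer_id = 1` leaves the row in place with is_active set to 0.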
Accidentally I deleted my table. How can I get that table back?
1. Take a tail-log backup.
2. Restore the full backup.
3. Restore the differential backup (if you have one) or log backups in sequence order.
4. Restore the tail-log backup.

How do you improve the speed of a SQL database server and how do you avoid deadlocks?
1. Check the disk space available.
2. Check whether any client jobs or queries are causing problems (use Profiler).
3. Use UPDATE STATISTICS to keep index statistics current.
4. Set appropriate indexes and constraints on tables.

Is it possible to create a stored procedure that runs a query and outputs the results to a text file, and allows me to add extra delimiters and static field info? If so, how?
SQL Server has no native command for outputting query results to a file. You can use the extended stored procedure xp_cmdshell and call isql (command-line SQL) with your query and output the results to a file. Any delimiters would need to be part of the SELECT string:

```sql
DECLARE @isqlString varchar(255)
SELECT @isqlString = 'isql -Q "SELECT DateCol FROM NorthwindTest.dbo.Test" -E -o C:\Results.txt'
EXEC master..xp_cmdshell @isqlString
```

What is the basic difference between a clustered and a non-clustered index?
A clustered index is unique for any given table and we can have only one clustered index on a table. The leaf level of a clustered index is the actual data, and the data is re-sorted in the case of a clustered index. In the case of a non-clustered index, the leaf level is actually a pointer to the data in the rows, so we can have as many non-clustered indexes as we need on the table.

Types of User Defined Function in sql server 2005
In SQL databases, a user-defined function provides a mechanism for extending the functionality of the database server by adding a function that can be evaluated in SQL statements. The SQL standard distinguishes between scalar and table functions. A scalar function returns only a single value (or NULL), whereas a table function returns a (relational) table comprising zero or more rows, each row with one or more columns. There are three types of UDFs: -
- Scalar
- Inline table-valued
- Multi-statement table-valued
Scalar Functions
Scalar functions return a data type such as int, money, varchar, real, etc. They can be used anywhere a built-in SQL function is allowed. The syntax for a scalar function is the following:

```sql
CREATE FUNCTION dbo.Factorial ( @iNumber int )
RETURNS INT
AS
BEGIN
    DECLARE @i int
    IF @iNumber <= 1
        SET @i = 1
    ELSE
        SET @i = @iNumber * dbo.Factorial( @iNumber - 1 )
    RETURN (@i)
END
```

II. In-Line Table Functions
In-line table functions are functions that return the output of a single SELECT statement as a table data type. Since this type of function returns a table, the output can be used in joins of queries as if it was a standard table. The syntax for an in-line table function is as follows:

```sql
CREATE FUNCTION dbo.AuthorsForState( @cState char(2) )
RETURNS TABLE
AS
RETURN (SELECT * FROM Authors WHERE state = @cState)
```

III. Multistatement Table Functions
Multistatement table functions are similar to stored procedures except that they return a table. This type of function is suited to situations where more logic is required than can be expressed in a single query.
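Since the multistatement form is the only type without an example here, the following is a minimal sketch (the dbo.FirstNNumbers name and its logic are illustrative, not from the original text):

```sql
-- Hypothetical example: return the integers 1..@n as a table variable.
CREATE FUNCTION dbo.FirstNNumbers ( @n int )
RETURNS @result TABLE ( num int )
AS
BEGIN
    DECLARE @i int
    SET @i = 1
    WHILE @i <= @n        -- multiple statements are allowed, unlike in-line functions
    BEGIN
        INSERT INTO @result (num) VALUES (@i)
        SET @i = @i + 1
    END
    RETURN
END
```

It can then be queried like a table: `SELECT * FROM dbo.FirstNNumbers(5)` returns five rows.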
Examples of user defined functions:
The following User Defined Function (UDF) returns the number of days in a month. It is very simple yet very powerful and foolproof.

```sql
CREATE FUNCTION [dbo].[udf_GetNumDaysInMonth] ( @myDateTime DATETIME )
RETURNS INT
AS
BEGIN
    DECLARE @rtDate INT
    SET @rtDate = CASE
        WHEN MONTH(@myDateTime) IN (1, 3, 5, 7, 8, 10, 12) THEN 31
        WHEN MONTH(@myDateTime) IN (4, 6, 9, 11) THEN 30
        ELSE CASE
            WHEN (YEAR(@myDateTime) % 4 = 0 AND YEAR(@myDateTime) % 100 != 0)
                 OR (YEAR(@myDateTime) % 400 = 0) THEN 29
            ELSE 28
        END
    END
    RETURN @rtDate
END
GO
```

Run the following script in Query Editor:

```sql
SELECT dbo.udf_GetNumDaysInMonth(GETDATE()) NumDaysInMonth
GO
```

ResultSet:
NumDaysInMonth
31

SQL SERVER – ACID (Atomicity, Consistency, Isolation, Durability)
ACID (an acronym for Atomicity, Consistency, Isolation, Durability) is a concept that database professionals generally look for when evaluating databases and application architectures. For a reliable database, all four of these attributes should be achieved.
Atomicity is an all-or-none proposition: a transaction either completes entirely or has no effect at all.
Consistency guarantees that a transaction never leaves your database in a half-finished state.
Isolation keeps transactions separated from each other until they're finished.
Durability guarantees that the database will keep track of pending changes in such a way that the server can recover from an abnormal termination.
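A minimal T-SQL sketch of atomicity (the Accounts table is hypothetical): either both updates commit together, or an error rolls both back:

```sql
-- Hypothetical transfer between two accounts: all-or-nothing.
BEGIN TRY
    BEGIN TRANSACTION;
    UPDATE Accounts SET balance = balance - 100 WHERE account_id = 1;
    UPDATE Accounts SET balance = balance + 100 WHERE account_id = 2;
    COMMIT TRANSACTION;        -- both changes become permanent together
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION;      -- on any error, neither change survives
END CATCH
```

Durability then guarantees that once the COMMIT returns, the transfer survives a crash.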

What's the difference between VARCHAR and NVARCHAR, and how important is the difference? VARCHAR is an abbreviation for variable-length character string. It's a string of text characters that can be as large as the page size for the database table holding the column in question. A table page is 8,192 bytes, and no one row in a table can be more than 8,060 bytes; this in turn limits the maximum size of a VARCHAR to 8,000 bytes. NVARCHAR is the Unicode equivalent: it stores each character in 2 bytes so that it can hold characters from any language, which halves the maximum length to 4,000 characters. Use NVARCHAR when you need to store international text; use VARCHAR when the data is single-byte character data and you want to save space.
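The storage difference is easy to see with DATALENGTH, which returns the number of bytes (the variable names here are illustrative):

```sql
-- VARCHAR stores 1 byte per character; NVARCHAR stores 2 bytes per character.
DECLARE @v VARCHAR(20)  = 'hello';
DECLARE @n NVARCHAR(20) = N'hello';
SELECT DATALENGTH(@v) AS varchar_bytes,    -- 5
       DATALENGTH(@n) AS nvarchar_bytes;   -- 10
```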

Development life cycles :
SDLC (System Development Life Cycle) is the overall process of developing information systems through a multi-stage process, from investigation of initial requirements through analysis, design, implementation, and maintenance. The days are gone when one COBOL programmer used to analyze, test, and implement software systems. Systems have become complex and large teams are involved: architects, analysts, programmers, testers, users, etc. To manage this, a number of SDLC models have been created. Following are the popular models:-
 √ Waterfall Model.
 √ Spiral Model. 
√ Build and Fix model.
 √ Rapid prototyping Model. 
√ Incremental Model. 
In this section we will go into depth on the different SDLC models.

Water Fall Model
This is the oldest model. It has a sequence of stages; the output of one stage becomes the input of the next. Following are the stages in the Waterfall model:-
√ System Requirement: - This is initial stage of the project where end user requirements are gathered and documented.
√ System Design: - In this stage detail requirements, screen layout, business rules, process diagram, pseudo code and other documentations are prepared. This is first
step in technical phase.
√ Implementation: - Depending on the design document actual code is
written here.
√ Integration and Testing: - All pieces are brought together and tested. Bugs are
removed in this phase.

What is the purpose of Replication? Replication is a way of keeping data synchronized
in multiple databases. SQL Server replication has two important aspects: publisher and subscriber.
Publisher: a database server that makes data available for replication is called a Publisher.
Subscriber: database servers that get data from the publishers are called Subscribers.
Can you explain project life cycle ?
1.Requirement 
2.Analysis
3.Design
4.Coding
5.Testing
6.Implementation

How can you raise custom errors from stored procedure ?
The RAISERROR statement is used to produce an ad hoc error message or to retrieve a custom message that is stored in the sysmessages table. You can use this statement with the error handling code presented in the previous section to implement custom error messages in your applications. The syntax of the statement is shown here.
RAISERROR ( { msg_id | msg_str } { , severity , state } [ , argument [ ,...n ] ] ) [ WITH option [ ,...n ] ]
A description of the components of the statement follows.
msg_id :- The ID for an error message, which is stored in the error column in sysmessages.
msg_str :- A custom message that is not contained in sysmessages.
severity :- The severity level associated with the error. The valid values are 0–25. Severity levels 0–18 can be used by any user, but 19–25 are only available to members of the fixed server role sysadmin. When levels 19–25 are used, the WITH LOG option is required.
state :- A value that indicates the invocation state of the error. The valid values are 0–127. This value is not used by SQL Server.
argument, ... :- One or more variables that are used to customize the message. For example, you could pass the current process ID (@@SPID) so it could be displayed in the message.
WITH option, ... :- The three values that can be used with this optional argument are described here.
LOG - Forces the error to be logged in the SQL Server error log and the NT application log.
NOWAIT - Sends the message immediately to the client.
SETERROR - Sets @@ERROR to the unique ID for the message or 50,000.
The number of options available for the statement makes it seem complicated, but it is actually easy to use. The following shows how to create an ad hoc message with a severity of 10 and a state of 1.

```sql
RAISERROR ('An error occurred updating the NonFatal table', 10, 1)
```

--Results--
An error occurred updating the NonFatal table

The statement does not have to be used in conjunction with any other code, but for our purposes it will be used with the error handling code presented earlier. The following alters the ps_NonFatal_INSERT procedure to use RAISERROR.
```sql
USE tempdb
GO
ALTER PROCEDURE ps_NonFatal_INSERT
    @Column2 int = NULL
AS
DECLARE @ErrorMsgID int
INSERT NonFatal VALUES (@Column2)
SET @ErrorMsgID = @@ERROR
IF @ErrorMsgID <> 0
BEGIN
    RAISERROR ('An error occurred updating the NonFatal table', 10, 1)
END
```

When an error-producing call is made to the procedure, the custom message is passed to the client. The following shows the output generated by Query Analyzer.
What is the difference between a HAVING clause and a WHERE clause? A WHERE clause filters individual rows before they are grouped by GROUP BY; a HAVING clause filters whole groups after the GROUP BY aggregation has been applied, so only HAVING can reference aggregate functions.
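For instance (the employees table and its columns are illustrative):

```sql
-- WHERE filters rows first; HAVING filters the aggregated groups.
SELECT department, COUNT(*) AS headcount
FROM employees
WHERE is_active = 1            -- row filter, applied before grouping
GROUP BY department
HAVING COUNT(*) > 10;          -- group filter, applied after aggregation
```

Moving the `COUNT(*) > 10` condition into the WHERE clause would be a syntax error, because aggregates are not yet computed at that stage.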
What is the difference between UNION and UNION ALL SQL syntax? UNION combines the result sets of two queries but returns only distinct rows, while UNION ALL returns all rows from both queries, including duplicates. Because it skips the duplicate-elimination step, UNION ALL is faster.
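A quick illustration (the customers and suppliers tables are hypothetical):

```sql
-- UNION removes duplicate rows across the combined result.
SELECT city FROM customers
UNION
SELECT city FROM suppliers;      -- each city appears at most once

-- UNION ALL keeps every row, duplicates included.
SELECT city FROM customers
UNION ALL
SELECT city FROM suppliers;
```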
What are the different ways of moving data/ databases between servers and databases in SQL Server?
There are lots of option available; you have to choose your option depending upon your requirements. Some of the options you have are: BACKUP/RESTORE, detaching and attaching
databases, replication, DTS, BCP, logshipping, INSERT...SELECT, SELECT...INTO, creating INSERT scripts to generate data.
What are the different locks in SQL SERVER ?
Depending on the transaction level six types of lock can be acquired on data :-
 Intent
The intent lock shows the future intention of SQL Server's lock manager to acquire locks on a specific unit of data for a particular transaction. SQL Server uses intent locks to queue exclusive locks, thereby ensuring that these locks will be placed on the data elements in the order the transactions were initiated.
Intent locks come in three flavors:
a)intent shared (IS)
b)intent exclusive (IX)
c)shared with intent exclusive (SIX). 
IS locks indicate that the transaction will read some (but not all) resources in the table or page by placing shared locks.
IX locks indicate that the transaction will modify some (but not all) resources in the table or page by placing exclusive locks.
SIX locks indicate that the transaction will read all resources and modify some (but not all) of them. This is accomplished by placing shared locks on the resources read and exclusive locks on the rows modified. Only one SIX lock is allowed per resource at one time; therefore, SIX locks prevent other connections from modifying any data in the resource (page or table), although they do allow reading the data in the same resource.
Shared
Shared locks (S) allow transactions to read data with SELECT statements. Other connections are allowed to read the data at the same time; however, no transactions are allowed to modify data until the shared locks are released.

Update
Update locks (U) are acquired just prior to modifying the data. If a transaction modifies a row, then the update lock is escalated to an exclusive lock; otherwise, it is converted to a shared lock. Only one transaction can acquire update locks on a resource at one time. Using update locks prevents multiple connections from holding a shared lock while intending to eventually modify a resource with an exclusive lock. Shared locks are compatible with other shared locks, but are not compatible with update locks.

Exclusive
Exclusive locks (X) completely lock the resource from any type of access, including reads. They are issued when data is being modified through INSERT, UPDATE, and DELETE statements.

Schema
Schema modification locks (Sch-M) are acquired when data definition language statements, such as CREATE TABLE, CREATE INDEX, ALTER TABLE, and so on, are being executed.
Schema stability locks (Sch-S) are acquired when stored procedures are being compiled.

Bulk Update
Bulk update locks (BU) are used when performing a bulk copy of data into a table with the TABLOCK hint. These locks improve performance while bulk copying data into a table; however, they reduce concurrency by effectively preventing any other connections from reading or modifying data in the table.

Lock mode - Description
Shared (S) - Protects a resource for read access. No other transactions can modify the data while shared (S) locks exist on the resource.
Exclusive (X) - Indicates a data modification, such as an insert, an update, or a deletion. Ensures that multiple updates cannot be made to the same resource at the same time.
Update (U) - Prevents a common form of deadlock. Only one transaction at a time can obtain a U lock on a resource. If the transaction modifies the resource, then the U lock is converted to an X lock.
Schema - Used when an operation dependent on the schema of a table is executing. The types of schema locks are schema modification (Sch-M) and schema stability (Sch-S).
Intent - Establishes a lock hierarchy. The most common types of intent lock are IS, IU, and IX. These locks indicate that a transaction is operating on some, but not all, resources lower in the hierarchy. The lower-level resources will have an S, U, or X lock.




What are different transaction levels in SQL SERVER?

The transaction isolation level decides how one transaction is isolated from another. Using isolation levels you can control locking behavior in SQL Server. Two read anomalies they guard against are:

  1. Dirty Reads – Reading uncommitted data. There is no guarantee that the data read will ever be committed, meaning the data could well be bad.
  2. Non-repeatable Reads – Data that you are working with has been changed by another transaction since you first read it in, so subsequent reads of the same data in the same transaction could well be different.
  3. Phantom Reads – New rows that satisfy your query's criteria are inserted (or deleted) by another transaction between reads, so re-running the same query within the same transaction returns a different set of rows.

There are Five transaction levels in SQL SERVER :-

READ COMMITTED Shared locks are held only while the data is being read, so a transaction never reads uncommitted changes made by other transactions. Other transactions can insert and modify data in the same table between reads, however, as long as it is not locked by the first transaction.
This is default level of SQL Server

READ UNCOMMITTED No shared locks and no exclusive locks are honored. This is the least restrictive isolation level resulting in the best concurrency but the least data integrity.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED 

REPEATABLE READ This setting disallows dirty and non-repeatable reads. However, even though locks are held on the data that has been read, new rows can still be inserted in the table, and will subsequently be seen by the transaction (phantom reads).
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ  

SERIALIZABLE This is the most restrictive setting holding shared locks on the range of data. This setting does not allow the insertion of new rows in the range that is locked; therefore, no phantoms are allowed. Following is the syntax for setting transaction level see below
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE 

SNAPSHOT
This provides the same guarantees as serializable.
So what's the difference? Well it’s more in the way it works, using snapshot doesn't block other queries from inserting or updating the data touched by the snapshot transaction. Instead row versioning is used so when data is changed the old version is kept in tempdb so existing transactions will see the version without the change. When all transactions that started before the changes are complete the previous row version is removed from tempdb. This means that even if another transaction has made changes you will always get the same results as you did the first time in that transaction.
ALTER DATABASE IsolationTests  
SET ALLOW_SNAPSHOT_ISOLATION ON 
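Once snapshot isolation has been allowed on the database as above, a session opts into it like any other level (SomeTable is a placeholder name):

```sql
-- Opt this session into snapshot isolation after it has been enabled.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    SELECT * FROM SomeTable;   -- sees row versions as of transaction start,
                               -- even if other sessions change the rows meanwhile
COMMIT TRANSACTION;
```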
For more information on transaction isolation levels, see the link below:

http://gavindraper.com/2012/02/18/sql-server-isolation-levels-by-example/

What is the difference between DELETE TABLE and TRUNCATE TABLE commands? 
Following are difference between them :-
√ DELETE logs each deleted row, which makes the delete operation slow. TRUNCATE TABLE does not log individual rows; it logs only the deallocation of the table's data pages, so TRUNCATE TABLE is faster than DELETE.
√ DELETE can have a WHERE clause while TRUNCATE cannot.
√ TRUNCATE TABLE does not fire DELETE triggers.
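Side by side (the orders and staging_orders tables are hypothetical):

```sql
-- DELETE can be selective, is fully logged, and fires DELETE triggers.
DELETE FROM orders WHERE status = 'cancelled';

-- TRUNCATE removes all rows, is minimally logged, allows no WHERE clause,
-- and fires no triggers.
TRUNCATE TABLE staging_orders;
```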
What is DTS?
DTS (Data Transformation Services) is SQL Server's tool for extracting, transforming, and loading data; it is commonly used to pull data from various sources into a data warehouse star schema.
What is fillfactor ?
Twist :- When does page split occurs ?
The 'fill factor' option specifies how full SQL Server will make each index page. When there is no free space to insert a new row on an index page, SQL Server creates a new index page and transfers some rows from the previous page to the new one. This operation is called a page split. You can reduce the number of page splits by setting an appropriate fill factor to reserve free space on each index page. The fill factor is a value from 1 through 100 that specifies the percentage of the index page to be filled. The default value is 0, which is treated similarly to a fill factor of 100, except that SQL Server leaves some space within the upper levels of the index tree when FILLFACTOR = 0. The fill factor percentage is used only at the time the index is created. If the table contains read-only data (or data that very rarely changes), you can set the fill factor to 100. When the table's data is modified very often, you can decrease the fill factor to 70% or whatever you think is best.
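A fill factor is typically specified when creating or rebuilding an index (the table and index names here are illustrative):

```sql
-- Fill leaf pages only to 70%, leaving 30% free space on each page
-- to absorb future inserts without page splits.
CREATE NONCLUSTERED INDEX ix_orders_customer
ON orders (customer_id)
WITH (FILLFACTOR = 70);
```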
How can you increase SQL performance?
√ Every index increases the time it takes to perform INSERTs, UPDATEs, and DELETEs, so the number of indexes should not be too large. Try to use a maximum of 4-5 indexes on one table, not more. If you have a read-only table, then the number of indexes may be increased.
√ Keep your indexes as narrow as possible. This reduces the size of the index and reduces the number of reads required to read the index.
√ Try to create indexes on columns that have integer values rather than character values.
√ If you create a composite (multi-column) index, the order of the columns in the key is very important. Try to order the columns in the key so as to enhance selectivity, with the most selective columns leftmost in the key.
√ If you want to join several tables, try to create surrogate integer keys for this purpose and create indexes on their columns.
√ Create a surrogate integer primary key (an identity column, for example) if your table will not have many insert operations.
√ Clustered indexes are preferable to nonclustered indexes if you need to select by a range of values or you need to sort the result set with GROUP BY or ORDER BY.
√ If your application will be performing the same query over and over on the same table, consider creating a covering index on the table.
√ You can use the SQL Server Profiler Create Trace Wizard with the "Identify Scans of Large Tables" trace to determine which tables in your database may need indexes. This trace will show which tables are being scanned by queries instead of using an index.
What are indexes? What is the difference between clustered and nonclustered indexes?
Indexes in SQL Server are similar to the indexes in books. They help SQL Server retrieve the data quickly. There are clustered and nonclustered indexes.
A clustered index is a special type of index that reorders the way in which records in the table are physically stored; therefore, a table can have only one clustered index. The leaf nodes of a clustered index contain the data pages.
A nonclustered index is a special type of index in which the logical order of the index does not match the physical stored order of the rows on disk. The leaf nodes of a nonclustered index do not consist of the data pages. Instead, the leaf nodes contain index rows.
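In code, the one-versus-many distinction looks like this (table and index names are illustrative):

```sql
-- Only one clustered index per table: it defines the physical row order.
CREATE CLUSTERED INDEX ix_orders_id ON orders (order_id);

-- Many nonclustered indexes are allowed: separate structures whose
-- leaf rows point back at the data.
CREATE NONCLUSTERED INDEX ix_orders_date     ON orders (order_date);
CREATE NONCLUSTERED INDEX ix_orders_customer ON orders (customer_id);
```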
What is normalization? What are the different types of normalization?
It is a set of rules that have been established to aid in the design of tables that are meant to be connected through relationships. This set of rules is known as Normalization. Benefits of normalizing your database include:
√ Avoiding repetitive entries
√ Reducing required storage space
√ Preventing the need to restructure existing tables to accommodate new data
√ Increased speed and flexibility of queries, sorts, and summaries
First Normal Form
For a table to be in first normal form, data must be broken up into the smallest units possible. In addition to breaking data up into the smallest meaningful values, tables in first normal form should not contain repeating groups of fields.
Figure :- 10.1 Repeating groups example
In the above example, city1 and city2 are repeating. In order for this table to be in first normal form you have to modify the table structure as follows. Also note that the Customer Name is now broken down to first name and last name (first normal form data should be broken down to the smallest unit).
Second Normal Form
The second normal form states that each field in a table with a multiple-field primary key must be directly related to the entire primary key. Or, in other words, each non-key field should be a fact about all the fields in the primary key. In the above customer table, city is not linked to any primary field.
Figure :- 10.3 Normalized customer table.
Figure :- 10.4 City is now shifted to a different master table.
That takes our database to second normal form.
Third Normal Form
A non-key field should not depend on another non-key field. Here the field "Total" is dependent on "Unit price" and "Qty".
Figure :- 10.5 Table in third normal form
So now the "Total" field is removed and is instead computed as Unit price * Qty.
What is denormalization?
Denormalization is the process of putting one fact in numerous places (it is the reverse of normalization). Only one valid reason exists for denormalizing a relational design - to enhance performance. The trade-off is that denormalization increases redundancy in the database.
What is a candidate key?
A table may have more than one combination of columns that could uniquely identify the rows in the table; each such combination is a candidate key. During database design you pick one of the candidate keys to be the primary key. For example, in the supplier table both supplierid and suppliername can be candidate keys, but you will pick only supplierid as the primary key.

What does Composite Key mean?

A composite key, in the context of relational databases, is a combination of two or more columns in a table that can be used to uniquely identify each row in the table. Uniqueness is only guaranteed when the columns are combined; when taken individually the columns do not guarantee uniqueness.
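A sketch of a composite primary key (the enrollment table and its columns are hypothetical): neither column is unique on its own, but the pair is:

```sql
-- student_id repeats across courses and course_id repeats across students,
-- but together they uniquely identify one enrollment row.
CREATE TABLE enrollment (
    student_id  int  NOT NULL,
    course_id   int  NOT NULL,
    enrolled_on date NOT NULL,
    CONSTRAINT pk_enrollment PRIMARY KEY (student_id, course_id)  -- composite key
);
```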



What is the use (purpose) of master model msdb and tempdb databases in MS SQL Server?


Microsoft SQL Server has four visible system databases (master, model, msdb, and tempdb) plus one hidden system database (the Resource database) - five in total.
master: 
The master database records all of the system level information for a SQL Server system. It records all login accounts and all system configuration settings. The master database records the existence of all other databases, including the location of the database files. master also records the initialization information for SQL Server. DBAs should always have a recent backup of master available.
tempdb: 
tempdb holds all temporary tables, table variables, temporary stored procedures, and cursors. It also fills any other temporary storage needs, such as work tables generated by SQL Server. tempdb is a global resource: the temporary tables and stored procedures for all users connected to the system are stored in tempdb. tempdb is re-created every time SQL Server is started, so the system starts with a clean copy of the database. Because temporary tables and stored procedures are dropped automatically on disconnect, and no connections are active when the system is shut down, there is never anything in tempdb to be saved from one session of SQL Server to another.
By default, tempdb autogrows as needed while SQL Server is running. Unlike other databases, however, it is reset to its initial size each time the database engine is started. If the size defined for tempdb is small, part of your system processing load may be taken up with autogrowing tempdb to the size needed to support your workload each time to restart SQL Server. You can avoid this overhead by using
ALTER DATABASE to increase the size of tempdb.
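A minimal sketch of pre-sizing tempdb (the logical file name tempdev is the default for the tempdb data file; verify yours in sys.master_files):

```sql
-- Pre-size the tempdb data file so it does not have to autogrow
-- back up to working size after every restart.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 512MB);
```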
model: 
The model database is used as the template for all databases created on a system. When a CREATE DATABASE statement is issued, the first part of the database is created by copying in the contents of the model database, then the remainder of the new database is filled with empty pages. Because tempdb is created every time SQL Server is started, the model database must always exist on a SQL Server system.
msdb: 
The msdb database is used by SQL Server Agent for scheduling alerts and jobs, hosting DTS packages and recording operators.

Resource:
The Resource database is a read-only database that contains all the system objects that are included with SQL Server 2005. SQL Server system objects, such as sys.objects, are physically persisted in the Resource database, but they logically appear in the sys schema of every database. The Resource database does not contain user data or user metadata

The ID of the Resource database is always 32767.

The physical file names of the Resource database are mssqlsystemresource.mdf and mssqlsystemresource.ldf.

By default, these files are located in <drive>:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\. Each instance of SQL Server has one and only one associated mssqlsystemresource.mdf file, and instances do not share this file. In a cluster, the Resource database exists in the \Data folder on a shared disk.


To determine the version number of the Resource database, use:
SELECT SERVERPROPERTY('ResourceVersion');
To determine when the Resource database was last updated, use:
SELECT SERVERPROPERTY('ResourceLastUpdateDateTime');


Difference between SQL Server 2005 and SQL Server 2008?


  • SQL Server 2005 has no option to compress backup files, but SQL Server 2008 does. For example, a backup that takes 5 minutes to store without compression in SQL Server 2005 may take only 3 minutes in SQL Server 2008 with compression.
  • SQL Server 2008 has support for additional datatypes:
  1. date
  2. time
  3. geospatial (geography and geometry)
  4. datetimeoffset (a timestamp with time zone offset)
  • Policy-Based Management (PBM) is another feature introduced with SQL Server 2008. PBM allows you to define and enforce policies for configuring and managing SQL Server across your enterprise. It goes hand-in-hand with CMS.
  • SSMS 2008 provides IntelliSense: when you write "select * from" it suggests table names, which SSMS 2005 does not.
  • SQL Server 2008 also added the Central Management Server (CMS). It only works with Windows Authentication, but it allows you to manage multiple SQL Servers at once: any SQL Server instance registered with the CMS via SSMS can be administered from one place. It's a really cool feature.
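A minimal backup-compression sketch for the first point (the database name and backup path are placeholders):

```sql
-- SQL Server 2008+: compress the backup as it is written to disk
BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backups\AdventureWorks.bak'
WITH COMPRESSION;
```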

Software Development Life Cycle (SDLC)
·         The software development life cycle (SDLC) (sometimes referred to as the system development life cycle) is the process of creating or altering software systems, and the models and methodologies that people use to develop these systems.  In any such lifecycle, people, process and technology all play a role in success.  PTC is committed to developing solutions for engineering (Software System Lifecycle Management) and IT (Application Lifecycle Management) organizations along with world class technology (Integrity) to accelerate innovation in software intensive products and systems.
    software development life cycle (SDLC)  stages include below
  Analysis
 Design
  Implementation
  Testing
  Evaluation

What is the difference between a Table, View and Synonym in SQL?
Table is a repository of data; the table itself is a physical entity that resides physically in the database.

View is a virtual table and not a part of the database's physical representation. Its definition is stored in the database, it can make data retrieval simpler and faster, and it also provides a secure accessibility mechanism.

Synonym is an alternative name (alias) for a table, view, stored procedure or other database object, which lets you reference the object without using its full schema-qualified name.

Main advantages of using views:
  • You can restrict access to a predetermined set of rows and columns of a table
  • You can hide the complexity of a query
  • You can hide the complexity of a calculation
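A minimal synonym sketch (the object names are placeholders):

```sql
-- Alias dbo.Employees as dbo.Emp; queries can then use the short name
CREATE SYNONYM dbo.Emp FOR dbo.Employees;

SELECT * FROM dbo.Emp;  -- resolved to dbo.Employees at run time
```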

How do you trap errors in stored procedures?
   BEGIN TRY
     EXEC ParentError
   END TRY
   BEGIN CATCH
     SELECT [errorline]     = ERROR_LINE(),
            [errornumber]   = ERROR_NUMBER(),
            [errormessage]  = ERROR_MESSAGE(),
            [errorseverity] = ERROR_SEVERITY(),
            [errorstate]    = ERROR_STATE(),
            [errorproc]     = ERROR_PROCEDURE()
   END CATCH

Sample output:
   errorline:     6
   errornumber:   208
   errormessage:  Invalid object name 'adminusers'.
   errorseverity: 16
   errorstate:    1
   errorproc:     adminprocdestord


.NET Q & A:

Which is the best place to store connectionstring in
.NET projects ?
Config files are the best places to store connectionstrings. If it
is a web-based application “Web.config” file will be used and if it
is a windows application “App.config” files will be used.
What are abstract classes ? An abstract class is designed to act as a base class (to be inherited by other classes). It is a design concept in program development and provides a base upon which other classes are built.

What is an Interface ? An interface is a contract that defines the signature of the functionality. So if a class implements an interface, it says to the outer world that it provides specific behavior. Example: if a class implements the IDisposable interface, that means it has functionality to release unmanaged resources, and external objects using this class know it has a contract by which it can dispose of unused unmanaged objects.
√ A single class can implement multiple interfaces.
√ If a class implements an interface then it has to provide an implementation for all its methods.

What are queues and stacks ? A queue is a first-in, first-out (FIFO) structure. A stack is a last-in, first-out (LIFO) structure.

What is Operator Overloading in .NET? It provides a way to define and use operators such as +, -, and / for user-defined classes or structs. It allows us to define/redefine the way operators work with our classes and structs. This allows programmers to make their custom types look and feel like simple types such as int and string. Operator overloading is done by using the "Operator" keyword. Note: VB.NET did not support operator overloading until VB.NET 2005.
ASP.NET
What is the sequence in which ASP.NET events are processed ? Following is the sequence in which the events occur :-
√ Page_Init.
√ Page_Load.
√ Control events
√ Page_Unload event.
The Page_Init event only occurs the first time the page is started, but Page_Load occurs on every subsequent request of the page.

 What is the use of the @Register directive ? The @Register directive informs the compiler of any custom server control added to the page. What is the appSettings section in the "Web.config" file ? The Web.config file defines configuration for a web project. Using the "appSettings" section we can define user-defined values. The example below defines a "ConnectionString" entry which can be used throughout the project for database connections: <configuration> <appSettings> <add key="ConnectionString" value="server=xyz;pwd=www;database=testing" /> </appSettings> </configuration>

What’s the use of the Global.asax file ? It allows executing ASP.NET application-level events and setting application-level variables.

What is Object Oriented Programming ?

It is a problem solving technique to develop software systems. It is a technique to think
real world in terms of objects. Object maps the software model to real world concept.
These objects have responsibilities and provide services to application or other objects.

What’s a Class ?

A class describes all the attributes of objects, as well as the methods that implement the
behavior of member objects. It’s a comprehensive data type which represents a blue print
of objects. It’s a template of object.

What’s an Object ?

It is a basic unit of a system. An object is an entity that has attributes, behavior, and
identity. Objects are members of a class. Attributes and behavior of an object are defined
by the class definition.


What is the relation between Classes and Objects ?
They look very much the same but are not the same. A class is a definition, while an object is an
instance of the class. A class is a blueprint, while objects are actual entities existing
in the real world. Example: we have a class CAR which has attributes and methods like Speed,
Brakes, Type of Car etc. Class CAR is just a prototype; we can then create real
objects which provide the functionality, for example a Maruti car
object with 100 km/h speed and urgent brakes.

What are different properties provided by Object-oriented
systems ?

Abstraction
It allows the complex real world to be represented in a simplified manner. Example: color is
abstracted to RGB; just by combining these three colors we can achieve
any color in the world. It's a model of a real-world object or concept.

Encapsulation
It is a process of hiding all the internal details of an object from the outside world.

Communication using messages
When an application wants to achieve a certain task it can only be done using a combination of
objects. A single object cannot do all the tasks. Example: if we want to make an order-processing
form, we will use a Customer object, Order object, Product object and Payment object to
achieve this functionality. In short, these objects should communicate with each other.
This is achieved when objects send messages to each other.
Object lifetime
All objects have a lifetime. Objects are created and initialized, the necessary functionality
is performed, and later the object is destroyed. Every object has its own state and identity,
which differ from instance to instance.

What is difference between Association, Aggregation and Inheritance relationships?
In object oriented world objects have relation and hierarchies in between them.
There are basically three kind of relationship in Object Oriented world :-

Association
This is the simplest relationship between objects. Example every customer has sales. So
Customer object and sales object have an association relation between them.

Aggregation
This is also called the composition model. Example: in order to make an "Accounts" class it
uses other objects, for example "Voucher", "Journal" and "Cash" objects. So the Accounts
class is an aggregation of these three objects.

Inheritance
Hierarchy is used to define more specialized classes based on a preexisting generalized
class. Example: we have a VEHICLE class, and we can inherit from this class to make a more
specialized class like CAR, which adds new attributes and uses some existing qualities
of the parent class. It shows more of a parent-child relationship. This kind of hierarchy
is called inheritance.

Polymorphism
When inheritance is used to extend a generalized class to a more specialized class, it
includes the behavior of the top (generalized) class. The inheriting class often implements
a behavior that can be somewhat different from the generalized class, but the name of the
behavior can be the same. It is important that a given instance of an object uses the correct
behavior, and the property of polymorphism allows this to happen automatically.


What is a Assembly?

√ Assembly is unit of deployment like EXE or a DLL.

√ An assembly consists of one or more files (dlls, exe’s, html files etc.), and
represents a group of resources, type definitions, and implementations of those
types. An assembly may also contain references to other assemblies. These
resources, types and references are described in a block of data called a manifest.
The manifest is part of the assembly, thus making the assembly self-describing.

√ An assembly is completely self-describing. An assembly contains metadata
information, which is used by the CLR for everything from type checking and
security to actually invoking the components' methods. As all information is in the
assembly itself, it is independent of the registry. This is the basic advantage
compared to COM, where the version was stored in the registry.

√ Multiple versions can be deployed side by side in different folders. These
different versions can execute at the same time without interfering with each
other. Assemblies can be private or shared. For private assembly deployment, the
assembly is copied to the same directory as the client program that references
it. No registration is needed, and no fancy installation program is required.

When the component is removed, no registry cleanup is needed, and no uninstall
program is required. Just delete it from the hard drive.

√ In shared assembly deployment, an assembly is installed in the Global Assembly
Cache (or GAC). The GAC contains shared assemblies that are
globally accessible to all .NET applications on the machine.


What are the different types of Assembly?
There are two types of assembly: private and shared.
A private assembly is normally used
by a single application, and is stored in the application's directory, or a sub-directory beneath. A
shared assembly is normally stored in the global assembly cache, which is a repository of assemblies
maintained by the .NET runtime. Shared assemblies are usually libraries of code which many
applications will find useful, e.g. Crystal report classes which will be used by all application for
Reports.


What is NameSpace?
Namespace has two basic functionality :-

√ NameSpace Logically group types, example System.Web.UI logically groups
our UI related features.

√ In the Object Oriented world it is quite possible that programmers will use the
same class name. By qualifying the class name with a NameSpace, this collision
can be avoided.


What is Difference between NameSpace and Assembly?

Following are the differences between namespace and assembly :

√ Assembly is physical grouping of logical units. Namespace logically groups
classes.

√ A namespace can span multiple assemblies.

What is webconfig file?

Web.config is the main settings and configuration file for an ASP.NET web application. The file is an XML document that defines configuration information regarding the web application. The web.config file contains information that controls module loading, security configuration, session state configuration, and application language and compilation settings. Web.config files can also contain application-specific items such as database connection strings.

Can we Create Index on View?
Not unless the view is schema-bound; otherwise SQL Server raises "Cannot create index on view 'coursesview' because the view is not schema bound."
ex:

select * from courses  -- table

create view coursesview as
select * from courses

select * from coursesview  -- view

create index viewidx on coursesview(cname)
-- Error: Cannot create index on view 'coursesview' because the view is not schema bound.

create index courseidx on courses(cname)
-- index created on table
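For contrast, a hedged sketch of a view that can be indexed (assuming the courses table has cid and cname columns): an indexed view must be created WITH SCHEMABINDING, use two-part object names, list its columns explicitly, and receive a unique clustered index first.

```sql
CREATE VIEW dbo.coursesview2
WITH SCHEMABINDING
AS
SELECT cid, cname
FROM dbo.courses;
GO
-- The first index on a view must be unique and clustered
CREATE UNIQUE CLUSTERED INDEX viewidx2
ON dbo.coursesview2 (cid);
```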

Full-Text Catalog?
  •  A full-text catalog is a physical structure that must reside on the local hard drive associated with the SQL Server instance. Each catalog is part of a specific filegroup. If no filegroup is specified when the catalog is created, the default filegroup is used.

CREATE FULLTEXT CATALOG ProductFTS
WITH ACCENT_SENSITIVITY = OFF

Full-text indexing? What steps are involved in full-text indexing in SQL Server?

Full-text search refers to the functionality in SQL Server that supports full-text queries against character-based data. These types of queries can include words and phrases as well as multiple forms of a word or phrase. To support full-text queries, full-text indexes must be implemented on the columns referenced in the query.

To implement full-text indexing in SQL Server, you should take the following steps:
  1. Create a full-text catalog, if necessary.
  2. Create the full-text index.
  3. Modify the list of noise words (SQL Server 2005) or stop words (SQL Server 2008), if necessary.
  4. Modify the thesaurus for the language being used, if necessary. 
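The first two steps can be sketched as follows (the table, column and key-index names are placeholders; the catalog reuses the ProductFTS example from above):

```sql
-- Step 1: create the full-text catalog (if one does not already exist)
CREATE FULLTEXT CATALOG ProductFTS;
GO
-- Step 2: create the full-text index; KEY INDEX must name a
-- unique, single-column, non-nullable index on the table
CREATE FULLTEXT INDEX ON dbo.Products (ProductName)
KEY INDEX PK_Products
ON ProductFTS;
```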
Can we use CTE inside stored procedure?
Yes we can use CTE inside the stored procedure see below:

create procedure sample
@period1 datetime, @period2 datetime
as
begin
set nocount on;

with Salary(totalsalary, id) as (

select sum(salary) as totalsalary, id from empsalary
where paydate > @period1 and paydate < @period2
group by id

)

select address.city, address.state, Salary.totalsalary
from address inner join
Salary on address.id = Salary.id

end

Execution:
exec sample '01-01-2009','05-05-2009'


Consider a table A whose id column contains four 1's, and a table B whose id column also contains four 1's:

    Table A: 1, 1, 1, 1
    Table B: 1, 1, 1, 1

Question: if we join the two tables and take COUNT(*), what is the output?
Ans:

select count(ta.taid) from ta
full join tb on ta.taid = tb.taid

O/P: 16 (each of the 4 rows in A matches each of the 4 rows in B, so the join produces 4 x 4 = 16 rows)
What are different normalization forms?


1.    1NF: Eliminate Repeating Groups Make a separate table for each set of related attributes, and give each table a primary key. Each field contains at most one value from its attribute domain.
2.    2NF: Eliminate Redundant Data If an attribute depends on only part of a multi-valued key, remove it to a separate table.
3.    3NF: Eliminate Columns Not Dependent On Key If attributes do not contribute to a description of the key, remove them to a separate table. All attributes must be directly dependent on the primary key.
4.    BCNF: Boyce-Codd Normal Form If there are non-trivial dependencies between candidate key attributes, separate them out into distinct tables.
5.    4NF: Isolate Independent Related Multiple Relationships No table may contain two or more 1:n or n:m relationships that are not directly related.
6.    5NF: Isolate Semantically Related Multiple Relationships There may be practical constraints on information that justify separating logically related many-to-many relationships.
7.    ONF: Optimal Normal Form A model limited to only simple (elemental) facts, as expressed in Object Role Model notation.
8.    DKNF: Domain-Key Normal Form A model free from all modification anomalies is said to be in DKNF.
Remember, these normalization guidelines are cumulative. For a database to be in 3NF, it must first fulfill all the criteria of a 2NF and 1NF database.


What is the purpose of SQL Profiler in SQL server?

SQL Profiler captures SQL Server events from a server. The events are saved in a trace file that can be used to analyze and diagnose problems.
The different purposes of using SQL Profiler are:
It is used to find the cause of the problem by stepping through problem queries.
It is very useful to analyze the cause of slow running queries.
It can be used to tune workload of the SQL server.
It also stores security-related actions that can be reviewed by a security administrator.
SQL Profiler also supports auditing the actions performed on instances of SQL Server.


Can we see Input/Output Parameters of SP's using SQL Profiler?

http://sqltechi.blogspot.in/2015/05/to-find-out-sp-output-parameters-using.html

We can use SQL Profiler to directly see the input parameter values of the stored procedure.

If the stored procedure returns a result set or can be executed repeatedly, we can execute the stored procedure with the captured input values to see the output.

 How to get the employee with the second-highest salary in a table?

ex:-

 id  name       salary
 1   alex       5000
 2   gravin     7500
 3   boxerlin   6580
 4   manju      2850
 5   meeralina  7459
 6   asra       7458

SELECT *
FROM   Employees
WHERE  Salary = (SELECT MIN(Salary)
                 FROM   (SELECT DISTINCT TOP (2) Salary
                         FROM   Employees
                         ORDER  BY Salary DESC) T)
                      
                         (OR)
                        
WITH T AS
(
SELECT *,
       DENSE_RANK() OVER (ORDER BY Salary Desc) AS Rnk
FROM Employees
)
SELECT *
FROM T
WHERE Rnk=2;

Ans: 5 meeralina 7459

or

Select * from ( select  Row_number() over (  order by salary desc ) as Row_INDEX ,
 * From employees ) as Temp where Row_INDEX = 2

Find all references to an object (table, stored procedure, scalar function) in SQL Server

SELECT DISTINCT o.name AS ObjectName,
CASE o.xtype
WHEN 'C' THEN 'CHECK constraint'
WHEN 'D' THEN 'Default or DEFAULT constraint'
WHEN 'F' THEN 'FOREIGN KEY constraint'
WHEN 'FN' THEN 'Scalar function'
WHEN 'IF' THEN 'In-lined table-function'
WHEN 'K' THEN 'PRIMARY KEY or UNIQUE constraint'
WHEN 'L' THEN 'Log'
WHEN 'P' THEN 'Stored procedure'
WHEN 'R' THEN 'Rule'
WHEN 'RF' THEN 'Replication filter stored procedure'
WHEN 'S' THEN 'System table'
WHEN 'TF' THEN 'Table function'
WHEN 'TR' THEN 'Trigger'
WHEN 'U' THEN 'User table'
WHEN 'V' THEN 'View'
WHEN 'X' THEN 'Extended stored procedure'
ELSE o.xtype
END AS ObjectType,
ISNULL( p.Name, '[db]') AS Location
FROM syscomments c
INNER JOIN sysobjects o ON c.id=o.id
LEFT JOIN sysobjects p ON o.Parent_obj=p.id
WHERE c.text LIKE '%any text to search for%'
ORDER BY Location, ObjectName

  How to Implement First Normal Form? (365 Media)

  Introduction

Normalization is used to organize the content of tables. Normalization is a technique which eliminates redundancy and inconsistent dependency of data from a database. Normalization makes the database more flexible.
Basic goals of normalization
  • Arrange data in the database into logical groupings.
  • Eliminate redundancy of data, i.e. minimize duplicate data in the database.
  • Eliminate inconsistency of data
First Normal Form
First normal form does not allow multivalued attributes, composite attributes or their combination. In other words, it allows only a single scalar value in each column.
Example
In the first table below the data is not normalized: it has multiple values in the Subject column, so we need to normalize it. To put this table in first normal form we store only a single value in each column.
Teacher_info table that contains repeating values:

Teacher ID | Teacher Name | Subject
D001       | Vipendra     | SQL Server, Oracle, Java
D002       | Deepak       | Accounting, C, C++

Teacher_info table in First Normal Form:

Teacher ID | Teacher Name | Subject
D001       | Vipendra     | SQL Server
D001       | Vipendra     | Oracle
D001       | Vipendra     | Java
D002       | Deepak       | C
D002       | Deepak       | C++
D002       | Deepak       | Accounting

What is LOCKING? 
LOCKING is a synchronization mechanism for enforcing limits on access to a resource in an environment where there are many threads of execution.


Lock granularity refers to the level at which locks occur:
  • Row
  • Table
  • Page
  • Database
Lock | Description
RID  | Row identifier. Used to lock a single row within a table.
PAG  | Data page or index page.
TAB  | Entire table, including all data and indexes.
MD   | Table metadata. Used to protect the table schema.
DB   | Database.

  • DML – Data Manipulation Language (SELECT, UPDATE, INSERT statements)
  • DDL – Data Definition Language (CREATE, ALTER, DROP statements)
  • DCL – Data Control Language (GRANT, REVOKE statements)
  • TCL – Transactional Control Language (COMMIT, ROLLBACK statements) 

How to get the employees with their managers?

 




What is the difference between the Having and Where clauses?
1) The Where clause can be used with Select, Update and Delete statements, but the Having clause can be used only with a Select statement.
2) We can't use aggregate functions in the Where clause (unless it is in a subquery contained in a Having clause), whereas we can use aggregate functions in the Having clause. We can use a column name in the Having clause, but the column must be contained in the Group By clause.

3) The Where clause filters individual records, whereas the Having clause, in conjunction with the Group By clause, works on record sets (groups of records).
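A sketch showing both clauses in one query (the table and column names are placeholders):

```sql
-- WHERE filters rows before grouping; HAVING filters the groups
SELECT deptid, COUNT(*) AS headcount
FROM Employees
WHERE salary > 3000        -- row-level filter
GROUP BY deptid
HAVING COUNT(*) > 5;       -- group-level filter (aggregates allowed)
```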




Lock mode     | Description
Shared (S)    | Protects a resource for read access. No other transactions can modify the data while shared (S) locks exist on the resource.
Exclusive (X) | Indicates a data modification, such as an insert, an update, or a deletion. Ensures that multiple updates cannot be made to the same resource at the same time.
Update (U)    | Prevents a common form of deadlock. Only one transaction at a time can obtain a U lock on a resource. If the transaction modifies the resource, the U lock is converted to an X lock.
Schema        | Used when an operation dependent on the schema of a table is executing. The types of schema locks are schema modification (Sch-M) and schema stability (Sch-S).
Intent        | Establishes a lock hierarchy. The most common types of intent lock are IS, IU, and IX. These locks indicate that a transaction is operating on some, but not all, resources lower in the hierarchy. The lower-level resources will have an S, U, or X lock.
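A quick way to observe these lock modes in practice (SQL Server 2005+, via the sys.dm_tran_locks dynamic management view):

```sql
-- List the locks held or requested by the current session
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;
```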







What is the difference between a Query, Subquery, Nested Subquery and Correlated Subquery?

Query: - Query can be defined as a way to inquire the data from the database. It is used to extract the data from one table or multiple tables depending upon the user needs.

Subquery:-If a sql statement contains another sql statement then the sql statement which is inside another sql statement is called Subquery. It is also known as nested query. The Sql Statement which contains the other sql statement is called Parent Statement.

Nested Subquery:-If a Subquery contains another subquery, then the subquery inside another subquery is called nested subquery.


Correlated Subquery:- If the outcome of a subquery depends on the value of a column of its parent query's table, then the subquery is called a correlated subquery.
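For example, a hedged sketch that finds employees earning above their department's average (assumes an Employees table with name, salary and deptid columns):

```sql
-- The inner query references e.deptid from the outer query, so it is
-- re-evaluated per outer row: a correlated subquery.
SELECT e.name, e.salary
FROM Employees e
WHERE e.salary > (SELECT AVG(d.salary)
                  FROM Employees d
                  WHERE d.deptid = e.deptid);
```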

Difference Between Replication and Database Mirroring:
  • In mirroring, the secondary database is in an offline mode while mirroring is in place. This ensures the integrity of the secondary database in the event a failover is required: nothing can write to the secondary database, so you know it will exactly match your primary. In replication, the secondary database is online and searchable; you can run queries against it.

Why can a table have only one clustered index?

A clustered index sorts and stores the data rows in the table based on the index key values.

Therefore only one clustered index can be created on each table because the data rows themselves can only be sorted in one order.

Non clustered Index
Nonclustered indexes have the same B-tree structure as clustered indexes, with two significant differences:

  • The data rows are not sorted and stored in order based on their nonclustered keys.
  • The leaf layer of a nonclustered index does not consist of the data pages.  
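A minimal sketch of creating a nonclustered index (the table and column names are placeholders); the INCLUDE clause adds non-key columns to the leaf level so matching queries can be answered without touching the data pages:

```sql
CREATE NONCLUSTERED INDEX IX_Employees_LastName
ON dbo.Employees (LastName)
INCLUDE (FirstName);
```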
Difference between Index Rebuild and Index Reorganize?

or

What is difference between rebuild and reorganize the indexes in SQL Server 2005?
or

When should i go for rebuild the index and when should i go
for reorganize the index?

Index Rebuild: This process drops the existing Index and Recreates the index.
USE AdventureWorks;
GO
ALTER INDEX ALL ON Production.Product REBUILD
GO
Index Reorganize: This process physically reorganizes the leaf nodes of the index.
USE AdventureWorks;
GO
ALTER INDEX ALL ON Production.Product REORGANIZE
GO
You need to check the fragmentation of the indices (using DBCC SHOWCONTIG) to decide whether to rebuild or reorganize.

Recommendation:

An index should be rebuilt when its fragmentation is greater than 40%, and reorganized when fragmentation is between 10% and 40%. The index
rebuild process uses more CPU and locks the database resources.
Create a primary key without a clustered index


Is it possible to create a primary key without a clustered index?

Actually there are three methods by which we can achieve this.

Method 1:
Using this method we can specify it while creating the table schema itself.
Create Table tblTest
(
Field1 int Identity not null primary key nonclustered,
Field2 varchar(30),
Field3 int null
)
Go
Method 2:
Using this method also we could specify it while creating the table schema. It just depends on your preference.
Create Table tblTest
(
Field1 int Identity not null,
Field2 varchar(30),
Field3 int null
Constraint pk_parent primary key nonclustered (Field1)
)
Go
Method 3:
If you already have a table with a clustered index on a PK field, then you may want to opt for this method.
Step 1: Find the constraint name
sp_helpconstraint tblTest
/*
This way we can find out the constraint name. Let's assume that our constraint name is PK_tblTest_74794A92
*/
Step 2: First drop the existing constraint
Alter table tblTest drop constraint PK_tblTest_74794A92
Step 3: Add the new nonclustered index to that field now

Alter table tblTest add constraint PK_parent1 primary key nonclustered (Field1)


What is the difference between clustered and nonclustered indexes?

Clustered Index:
1. A clustered index is a special type of index that reorders the way records in the table are physically stored.
2. Therefore table can have only one clustered index and this is usually made on the primary key. The leaf nodes of a clustered index contain the data pages.

Non-Clustered Index:
1. A nonclustered index is a special type of index in which the logical order of the index does not match the physical stored order of the rows on disk. The leaf node of a
nonclustered index does not consist of the data pages. Instead, the leaf nodes contain index rows.

2. There can be as many as 249 nonclustered indexes per table in SQL Server 2005 (999 from SQL Server 2008 onward)

Difference between varchar and nvarchar:


Mainly, nvarchar stores Unicode characters and varchar stores non-Unicode characters.
"Unicode" here means a 16-bit character encoding scheme that allows characters from many other languages, such as Chinese and Japanese, to be encoded in a single character set.
That means nvarchar uses 2 bytes per character for storage while varchar uses only 1 byte per character.
Example: take French. A word like Désolé may not be stored correctly in a varchar column, depending on the column's collation. The difference, as indicated by the N prefix, is that the N datatypes use Unicode encoding for their data.


If you are not storing characters from other languages such as French or Portuguese, both data types behave the same, and varchar should be used instead. The reason is that nvarchar takes twice as much space as varchar, because of the need to store the extended character codes for other languages.
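A quick way to see the storage difference is DATALENGTH (the initializer syntax below requires SQL Server 2008 or later):

```sql
-- Sketch: same 4-character value, different byte counts.
DECLARE @v  varchar(20)  = 'Ball',
        @nv nvarchar(20) = N'Ball';
SELECT DATALENGTH(@v)  AS VarcharBytes,   -- 4 (1 byte per character)
       DATALENGTH(@nv) AS NvarcharBytes;  -- 8 (2 bytes per character)
```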

How do I make a composite key with SQL Server Management Studio?

  1. Open the design table tab
  2. Highlight your two INT fields (Ctrl/Shift+click on the grey blocks in the very first column)
  3. Right click -> Set primary key
(or)


ALTER TABLE TABLENAME ADD CONSTRAINT CONSTRAINT_NAME PRIMARY KEY(COLUMN_NAME1,COLUMNAME2)


What is a candidate key?
A table may have more than one combination of columns that could uniquely identify the rows in the table; each such combination is a candidate key. During database design you pick one of the candidate keys to be the primary key. For example, in the supplier table both supplierid and suppliername can be candidate keys, but you would pick supplierid as the primary key.

What is a composite key?

A primary key of two or more attributes that together uniquely identify a row.

What is the difference between a candidate key and a composite key?

A candidate key is a unique key that could be used as a primary key, but is not necessarily used as one.

A composite key is a primary key of two or more attributes that together uniquely identify a row.
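As an illustration, a hypothetical OrderItems table whose composite primary key is (OrderId, ProductId):

```sql
-- Sketch: composite primary key spanning two columns.
-- Table and column names are hypothetical.
CREATE TABLE OrderItems
(
    OrderId   int NOT NULL,
    ProductId int NOT NULL,
    Quantity  int NOT NULL,
    CONSTRAINT PK_OrderItems PRIMARY KEY (OrderId, ProductId)
);
```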

Difference between char and varchar data types in Sql Server?


Answer: 

Char:
1. Fixed-length storage.
2. CHAR takes up 1 byte per character.
3. Use char when the data entries in a column are expected to be the same size.
4. Ex:
DECLARE @test char(100);
SET @test = 'Test';
Then @test occupies 100 bytes: the first four bytes hold the value and the rest are padded with spaces.


VarChar:
1. Variable-length storage.
2. VARCHAR takes up 1 byte per character, plus 2 bytes to hold length information.
3. Use varchar when the data entries in a column are expected to vary considerably in size.
4. Ex:
DECLARE @test varchar(100);
SET @test = 'Test';
Then @test occupies only 4 + 2 = 6 bytes: four bytes for the value and two bytes for the length information.

The following example illustrates the basic difference explained above:
DECLARE @CharName Char(20) = 'Ball',
 @VarCharName VarChar(20) = 'Ball'
SELECT DATALENGTH(@CharName) CharSpaceUsed,
 DATALENGTH(@VarCharName) VarCharSpaceUsed
Result:
CharSpaceUsed VarCharSpaceUsed
------------- ----------------
20            4
(1 row(s) affected)


OLTP (Online Transaction Processing) describes the processing of short, simple transaction data at operational sites, i.e. day-to-day operations in the source systems. The database is designed as application-oriented (E-R based), i.e. highly normalized so as to efficiently support INSERT and UPDATE operations. Data stored in these systems is raw, current (up-to-date) and isolated, at a very detailed level in flat relational tables.
OLAP (Online Analytical Processing) describes processing at the centralized, integrated and consistent data warehouse. It acts as the decision support system for business end users. The database is designed as subject-oriented (star/snowflake schema), i.e. highly denormalized to support SELECT operations. Data in these systems is generally consolidated, summarized and historical in nature.

What is the difference between OLTP and OLAP?

OLTP is the transaction system that collects business data. Whereas OLAP is the reporting and analysis system on that data.
OLTP systems are optimized for INSERT, UPDATE operations and therefore highly normalized. On the other hand, OLAP systems are deliberately denormalized for fast data retrieval through SELECT operations.
Explanatory Note:
In a department store, when we pay at the check-out counter, the salesperson keys all the data into a "Point-Of-Sale" machine. That data is transaction data, and the related system is an OLTP system.

On the other hand, the manager of the store might want to view a report on out-of-stock materials, so that he can place purchase order for them. Such report will come out from OLAP system

Difference Between SQL 2008 and SQL 2012?

Ref: http://www.slideshare.net/umarali1981/difference-between-sql-server-2008-and-sql-server-2012








Query:
Table A has 3 records, one of which has a NULL id.
Table B has 4 records and no NULL values.
When we use INNER JOIN, FULL OUTER JOIN, LEFT OUTER JOIN and RIGHT OUTER JOIN, how many records does each join return?

Sol:


Declare @TableA table(id int,name varchar(10))

Declare @TableB table(id int,name varchar(10))

Insert into @TableA
select 1,'aaaa'
union
select 2,'bbbb'
union
select null,null

Insert into @TableB
select 1,'aaaa'
union
select 2,'bbbb'
union
select 3,'cccc'
union
select 4,'dddd'

select * from @TableA
select * from @TableB

select 'INNER JOIN Records',*
from @TableA A
INNER JOIN @TableB B on A.id=B.id 

select 'FULL JOIN Records',* 
from @TableA A
FULL JOIN @TableB B on A.id=B.id 

select 'LEFT JOIN Records',*
from @TableA A
LEFT JOIN @TableB B on A.id=B.id 

select 'RIGHT JOIN Records',*
from @TableA A
RIGHT JOIN @TableB B on A.id=B.id 
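With the sample data above (A = {1, 2, NULL}, B = {1, 2, 3, 4}) and joins on id, a NULL id never satisfies the equality predicate, so the expected row counts are:

```sql
-- INNER JOIN : 2 rows (ids 1 and 2 match)
-- FULL  JOIN : 5 rows (2 matches + A's NULL row + B's ids 3 and 4)
-- LEFT  JOIN : 3 rows (all of A; the NULL row pairs with NULLs from B)
-- RIGHT JOIN : 4 rows (all of B; ids 3 and 4 pair with NULLs from A)
```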




What are filegroups? What are they used for in SQL Server?

At a minimum, every SQL Server database has two operating system files: a data file and a log file. Data files contain data and objects such as tables, indexes, stored procedures, and views. Log files contain the information that is required to recover all transactions in the database. Data files can be grouped together in filegroups for allocation and administration purposes.

SQL Server databases have three types of files:
  • Primary: The primary data file contains the startup information for the database and points to the other files in the database. User data and objects can be stored in this file or in secondary data files. Every database has one primary data file. The recommended file name extension for primary data files is .mdf.
  • Secondary: Secondary data files are optional, are user-defined, and store user data. Secondary files can be used to spread data across multiple disks by putting each file on a different disk drive. Additionally, if a database exceeds the maximum size for a single Windows file, you can use secondary data files so the database can continue to grow. The recommended file name extension for secondary data files is .ndf.
  • Transaction log: The transaction log files hold the log information that is used to recover the database. There must be at least one log file for each database. The recommended file name extension for transaction logs is .ldf.
For example, a simple database named Sales can be created that includes one primary file that contains all data and objects and a log file that contains the transaction log information. Alternatively, a more complex database named Orders can be created that includes one primary file and five secondary files. The data and objects within the database spread across all six files, and the four log files contain the transaction log information.
By default, the data and transaction logs are put on the same drive and path. This is done to handle single-disk systems. However, this may not be optimal for production environments. We recommend that you put data and log files on separate disks.

Filegroups:

Every database has a primary filegroup. This filegroup contains the primary data file and any secondary files that are not put into other filegroups. User-defined filegroups can be created to group data files together for administrative, data allocation, and placement purposes.
For example, three files, Data1.ndf, Data2.ndf, and Data3.ndf, can be created on three disk drives, respectively, and assigned to the filegroup fgroup1. A table can then be created specifically on the filegroup fgroup1. Queries for data from the table will be spread across the three disks; this will improve performance. The same performance improvement can be accomplished by using a single file created on a RAID (redundant array of independent disks) stripe set. However, files and filegroups let you easily add new files to new disks.
All data files are stored in one of the following filegroups:
  • Primary: The filegroup that contains the primary file. All system tables are allocated to the primary filegroup.
  • User-defined: Any filegroup that is specifically created by the user when the user first creates or later modifies the database.

Default Filegroup

When objects are created in the database without specifying which filegroup they belong to, they are assigned to the default filegroup. At any time, exactly one filegroup is designated as the default filegroup. The files in the default filegroup must be large enough to hold any new objects not allocated to other filegroups.
The PRIMARY filegroup is the default filegroup unless it is changed by using the ALTER DATABASE statement. Allocation for the system objects and tables remains within the PRIMARY filegroup, not the new default filegroup.
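As a sketch (database name, file names, and paths below are placeholders), a database created with a user-defined filegroup that is then made the default:

```sql
-- Sketch: database with a user-defined filegroup; all names/paths are placeholders.
CREATE DATABASE SalesDemo
ON PRIMARY
    (NAME = SalesDemo_data,  FILENAME = 'C:\Data\SalesDemo.mdf'),
FILEGROUP fgroup1
    (NAME = SalesDemo_data2, FILENAME = 'D:\Data\SalesDemo2.ndf')
LOG ON
    (NAME = SalesDemo_log,   FILENAME = 'E:\Log\SalesDemo.ldf');
GO
-- Make the user-defined filegroup the default for new objects.
ALTER DATABASE SalesDemo MODIFY FILEGROUP fgroup1 DEFAULT;
GO
-- A table can also be placed on a specific filegroup explicitly.
CREATE TABLE dbo.Orders (OrderId int NOT NULL) ON fgroup1;
```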

What are the main differences between CTEs and temp tables?

CTE
WITH cte (Column1, Column2, Column3)
AS
(
    SELECT Column1, Column2, Column3
    FROM SomeTable
)

SELECT * FROM cte
Temp Table
SELECT Column1, Column2, Column3
INTO #tmpTable
FROM SomeTable

SELECT * FROM #tmpTable


CTEs:

  • Are unindexable (but can use existing indexes on referenced objects)
  • Cannot have constraints
  • Are essentially disposable VIEWs
  • Persist only until the next query is run
  • Can be recursive
  • Do not have dedicated stats (rely on stats on the underlying objects)
#Temp Tables:
  • Are real materialized tables that exist in tempdb
  • Can be indexed
  • Can have constraints
  • Persist for the life of the current CONNECTION
  • Can be referenced by other queries or subprocedures
  • Have dedicated stats generated by the engine
How many times can we refer to a CTE inside a stored procedure?

Only once: a CTE can be referenced only in the single statement that immediately follows it.
Referring to it a second time gives an error.
Ex:
ALTER PROCEDURE [dbo].[GetLeaguePlayers]
(
    @idleague int,
    @pageNumber int,
    @pageSize int,
    @total int OUTPUT
)
AS
WITH CTEPlayers AS
(
    SELECT ROW_NUMBER() OVER (ORDER BY p.Name) AS RowNumber, p.Id, p.Name, t.Name AS Team
    FROM Players p INNER JOIN Teams t ON p.IdTeam=t.Id INNER JOIN Leagues l ON l.Id=t.IdLeague
    WHERE l.Id=@idleague
)
SELECT Id, Name
FROM CTEPlayers c
WHERE RowNumber>@pageSize*(@pageNumber-1) AND RowNumber<@pageSize*@pageNumber;
SET @total = ( SELECT COUNT(*) FROM CTEPlayers )

select Count(*) from CTEPlayers 

Output:

Incorrect syntax near the SET keyword, or "Invalid object name" for the SELECT COUNT(*) from the CTE: the CTE goes out of scope after the first statement that references it.

A CTE is basically a disposable view.  It only persists for a single statement, and then automatically disappears.


What is columnstore index in SQL SERVER 2012?


COLUMNSTORE INDEX (Transact-SQL)

http://sqltechi.blogspot.in/2015/05/columnstore-indexes-in-sql-2012.html

Creates a nonclustered in-memory columnstore index on a SQL Server table. Use a non-clustered columnstore index to take advantage of columnstore compression to significantly improve query execution times on read-only data.

Create a simple nonclustered columnstore index

The following example creates a simple table and a clustered index, and then demonstrates the syntax of creating a nonclustered columnstore index.
CREATE TABLE SimpleTable
(ProductKey [int] NOT NULL, 
OrderDateKey [int] NOT NULL, 
DueDateKey [int] NOT NULL, 
ShipDateKey [int] NOT NULL);
GO
CREATE CLUSTERED INDEX cl_simple ON SimpleTable (ProductKey);
GO
CREATE NONCLUSTERED COLUMNSTORE INDEX csindx_simple
ON SimpleTable
(OrderDateKey, DueDateKey, ShipDateKey);
GO

How do you monitor replication latency/delay?

Ref:
http://www.mssqltips.com/sqlservertip/2258/monitor-sql-server-replication-latency-using-tracer-tokens/



Tracer tokens were introduced with Sql Server 2005 transactional replication as a way to monitor the latency of delivering transactions from the publisher to the distributor and from the distributor to the subscriber(s) (and as a side-effect, you can also get the time from the publisher to the subscriber(s)). A tracer token is basically a small piece of data that is inserted into the transaction log for the given publication just as if it were data for a standard database transaction (i.e. a new record that was inserted, or an update, or a delete, etc.) and is treated basically the same way as well (with the exception that no user data is obviously modified in either the publisher or subscriber). You insert tokens for a given publication at the publisher, and the progress of the token can be queried on the publisher as well.
Prior to Sql 2005, tracking the latency for transaction distribution in a replication topology was very tedious to say the least - some solutions included using 'dummy' tables to push record updates/inserts/deletes out to each subscriber, however much code had to be written to track the changes through the entire system, including the necessity typically to connect to the actual distributors/subscribers to query for the changed data and verify the replication of said data. With Sql 2005, tracer tokens provide a simple, easy, light-weight way for you to check the latency of a transactional replication topology automatically from a single point (i.e. your publisher).
There are multiple built-in system procedures that you can use to manage tracer tokens, some of which include:
- sp_posttracertoken (used to push a token into a publication) - sp_helptracertokens (returns a row for each tracer token that has been inserted at a publisher for a given publication) - sp_helptracertokenhistory (returns latency information for a given token id value retrieved from either of the procedures above) - sp_deletetracertokenhistory (removes token records from the system meta tables)
Having these system procedures makes it very easy to automate the monitoring of latency for your publications, possibly fire errors/emails/alerts if the latency exceeds certain thresholds, and store historical latency information for reporting purposes or historical tracking.  I've included a sample stored procedure that I call "sp_replCheckLatency_allPubs" that will do basically what it says - when run on a given instance, it will walk through every database on the server that includes a publication, then walk through each publication in the given database, and post a tracer token to it. Once all the publications in all DB's have had tokens posted, the procedure will then wait for a specified delay, and then proceed to walk back through each of the posted tokens to retrieve latency information (via the sp_helptracertokenhistory system procedure using the token ID value returned from the original posting of the token). If at any point a token is found where the latency for the token to reach the distributor/subscribers has exceeded a certain specified threshold, an error will be raised with detailed information about the publication, token latency, etc. Then, after all the tokens have been inspected for the specified threshold, all the latency information can be optionally logged to a table for historical tracking and reporting purposes, or just to have around for querying in the future to correlate network outages, lag times, etc. if necessary.
In an environment for my client, we use this procedure in a scheduled job on each publisher to run on a scheduled basis (every 10 minutes here today), and if an error is fired indicating replication latency over our specified threshold (in our environment we use a total latency to all subscribers of 1 minute or less), then an email is immediately sent to a DBA group and a copy of a report from Reporting Services is sent along as well to show the day's latency information. The report from reporting services is built from the captured data from the procedure, and allows DBAs and IT folks to see trending analysis on networks throughout the environment related to replication.
and more see  http://sqltechi.blogspot.in/2014/10/how-to-find-delaylatency-between.html
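As a sketch, posting a tracer token and reading back its latency at the publisher (the publication name is a placeholder, and the WAITFOR delay should be tuned to your topology):

```sql
-- Sketch: post a tracer token and check its latency. Run at the publisher,
-- in the publication database. 'MyPublication' is a placeholder name.
DECLARE @tokenId int;
EXEC sys.sp_posttracertoken
     @publication     = N'MyPublication',
     @tracer_token_id = @tokenId OUTPUT;

-- Give the token time to flow publisher -> distributor -> subscriber(s).
WAITFOR DELAY '00:00:30';

-- Latency figures for this specific token.
EXEC sys.sp_helptracertokenhistory
     @publication = N'MyPublication',
     @tracer_id   = @tokenId;
```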

What are the disadvantages of triggers in SQL Server?

The disadvantages I see in having triggers are:
- Difficult to locate unless properly documented.
- They can't be traced and are invisible to applications.
- Triggers are fired whenever a modification is made to a table; they thus slow down the performance of DML statements.
- It's difficult to track trigger logic.
What is blocking in SQL Server?
Answer: 
Blocking
"Blocking" is simply a term that means that one connection is holding a lock on a resource when another connection wants to read or write to it. It doesn't necessarily mean that the owner connection won't release it, just that it's currently holding it.
Compare this to the case with a child holding the ball. The child holding the ball is blocking all other children from holding the ball.

SQL Server: Difference Between Locking, Blocking and Dead Locking

See below for explanations of LOCKING, BLOCKING and DEADLOCKING.
Let's try to revisit these concepts with some simple analogies.
LOCKING occurs when a connection needs access to a piece of data in the database, and it's necessary for SQL Server when managing multiple connections. Take the example of your garage: when you park your car in the garage, you are locking that spot in the garage.
LOCKING is a synchronization mechanism for enforcing limits on access to a resource in an environment where there are many threads of execution.
Lock granularity refers to the level at which locks occur:
  • Row
  • Table
  • Page
  • Database
BLOCKING occurs when two connections need access to the same piece of data concurrently and one connection is blocked, because at any given time only one connection can have access.
"Blocking" is simply a term that means that one connection is holding a lock on a resource when another connection wants to read or write to it.
Just like you stop (are blocked) at a traffic signal because other cars are using the crossing area.
A DEADLOCK occurs when one connection is blocked and waiting for a second connection to complete its work, while the second connection is in turn waiting for the first connection to release its lock. Just like needing to cross an intersection at the same moment someone from the opposite side also wants to cross: each of you needs the way the other is holding.
That is why one should be clear that locking is an integral part of SQL Server for handling concurrency; blocking is bad when one connection/transaction waits unnecessarily for a long time; and deadlocking is a phenomenon that should never occur.
for more info
http://sqltechi.blogspot.in/2014/08/blocking-and-deadlocks-and-types-of.html
http://www.connectsql.com/2011/03/sql-server-difference-between-locking.html
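A quick way to spot blocking at any moment is the sys.dm_exec_requests DMV; rows with a nonzero blocking_session_id are currently blocked:

```sql
-- Sketch: show which sessions are blocked and who is blocking them.
SELECT r.session_id          AS BlockedSession,
       r.blocking_session_id AS BlockingSession,
       r.wait_type,                       -- e.g. LCK_M_X for an exclusive-lock wait
       r.wait_time           AS WaitTimeMs
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;
```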

Can Views take the Input Parameters ?

The answer is NO. It is not possible in any version of SQL Server, as views were not built for that purpose.
But there are other ways to do it, and we can do it very easily with table-valued functions.
 -- My base table
CREATE TABLE my_Emplyee
      (ID         INT          NOT NULL IDENTITY(1,1) PRIMARY KEY,
       EMP_NAME   VARCHAR(50)  NULL,
       SAL        DECIMAL(20,2)NOT NULL)
      
-- Inserting some records into it
INSERT INTO  my_Emplyee
             (EMP_NAME,  SAL)
VALUES('Sukamal jana', 40000),
        ('Manisankar Year', 30000),
        ('Tuhin Shina', 40000),    
        ('Sabgram jit', 30000),  
        ('Subrata Kar', 20000),   
        ('Debojit Ganguli', 20000) 
       
-- Display records
SELECT * FROM my_Emplyee
Output
ID    EMP_NAME          SAL
1     Sukamal jana      40000.00
2     Manisankar Year   30000.00
3     Tuhin Shina       40000.00
4     Sabgram jit       30000.00
5     Subrata Kar       20000.00
6     Debojit Ganguli   20000.00
To reiterate: parameters cannot be passed to views.
Now let's move to the solution, using a table-valued function and passing a parameter to get the desired result set.
IF OBJECT_ID (N'fn_EMP_VIEW') IS NOT NULL
   DROP FUNCTION dbo.fn_EMP_VIEW
GO
CREATE FUNCTION dbo.fn_EMP_VIEW
                (@p_Sal DECIMAL(20,2))
RETURNS TABLE
AS RETURN
(
      SELECT *
      FROM   my_Emplyee
      WHERE  SAL>=@p_Sal
)
GO            
             
We want to display the details of employees who have a salary of 40,000 or more.
-- Execute it
SELECT *
FROM   dbo.fn_EMP_VIEW(40000)
Output
ID    EMP_NAME          SAL
1     Sukamal jana      40000.00
3     Tuhin Shina       40000.00
Can we perform INSERT/UPDATE/DELETE (DML) operations inside SQL Server functions?


No, you can't.

You would get this error:

Invalid use of side-effecting or time-dependent operator in 'INSERT/UPDATE/DELETE' within a function.

Example:

create function dbo.getcoursebycidOp(@id int)
returns int
begin
declare @cid int

select @cid= Courseid from dbo.Course
where Courseid=@id

insert into course(courseid,Coursename,CourseAdmin)
select @cid,'Test',@cid+4

return @cid
end

Msg 443, Level 16, State 15, Procedure getcoursebycidOp, Line 9
Invalid use of side-effecting or time-dependent operator in 'INSERT' within a function.

Consider a table that has 4 NULL values. Can we get the 2nd NULL row from the table?

Yes, we can:


select * from (select Row_number()over(partition by Testid order by Testid) as [Did],
* from Testnull)as T
where [Did]=2

Can we use a table-valued function in a CROSS JOIN operation?



Yes, with constant arguments or via a derived table. You can also use CROSS APPLY with a table-valued function to pass in columns from the outer table.


Cross apply example:

USE AdventureWorks;
SELECT DBID       = st.dbid,
       Query      = LEFT(st.text, 30),    
       ExecCount  = execution_count,
       WorkerTime = total_worker_time,
       PhysReads  = total_physical_reads,
       LogiReads  = total_logical_reads
FROM sys.dm_exec_query_stats qs
   CROSS APPLY sys.dm_exec_sql_text(sql_handle) st
ORDER BY ExecCount desc;
Cross Join Example
CROSS JOIN syntax:
SELECT * FROM [TABLE 1] CROSS JOIN [TABLE 2]
OR
SELECT * FROM [TABLE 1], [TABLE 2]
Table value function:
USE AdventureWorks;
GO
-- SQL Server user-defined function - UDF - Table-valued function - inline TVF
CREATE FUNCTION Sales.fnTopNOrders
               (@CustomerID AS INT,
                @n          AS INT)
RETURNS TABLE
AS
  RETURN
    SELECT   TOP ( @n ) SalesOrderID,
                        ShipDate = convert(CHAR(10),ShipDate,112),
                        TotalDue = convert(VARCHAR,TotalDue,1)
    FROM     AdventureWorks.Sales.SalesOrderHeader
    WHERE    CustomerID = @CustomerID
    ORDER BY TotalDue DESC
GO
SELECT   StoreName = s.Name,
         [Top].ShipDate,
         [Top].SalesOrderID,
-- SQL currency format – format money
         TotalDue = '$' + [Top].TotalDue
FROM     AdventureWorks.Sales.Store AS s
         INNER JOIN AdventureWorks.Sales.Customer AS c
           ON s.CustomerID = c.CustomerID
         CROSS APPLY AdventureWorks.Sales.fnTopNOrders(c.CustomerID,5) AS [Top]
WHERE    CustomerType = 'S'
ORDER BY StoreName,
         convert(MONEY,TotalDue) DESC
GO
Output:
StoreName               ShipDate    SalesOrderID      TotalDue
A Bike Store            20020208    45283             $37,643.14
A Bike Store            20020508    46042             $34,722.99
A Bike Store            20011108    44501             $26,128.87
A Bike Store            20010808    43860             $14,603.74
A Great Bicycle Company 20010908    44125             $3,450.98
A Great Bicycle Company 20020308    45569             $2,828.58
SELECT   StoreName = s.Name,
         [Top].ShipDate,
         [Top].SalesOrderID,
         TotalDue = '$' + [Top].TotalDue
FROM     AdventureWorks.Sales.Store AS s
         INNER JOIN AdventureWorks.Sales.Customer AS c
           ON s.CustomerID = c.CustomerID
         CROSS join AdventureWorks.Sales.fnTopNOrders(5,5) AS [Top]
WHERE    CustomerType = 'S'
ORDER BY StoreName,
         convert(MONEY,TotalDue) DESC

(OR)


But if you wrap the table-valued function in a derived table, you CAN use it with CROSS JOIN:

SELECT   StoreName = s.Name,
         [t].ShipDate,
         [t].SalesOrderID,
         TotalDue = '$' + [t].TotalDue
FROM     AdventureWorks.Sales.Store AS s
         INNER JOIN AdventureWorks.Sales.Customer AS c
           ON s.CustomerID = c.CustomerID
         CROSS join (select * From AdventureWorks.Sales.fnTopNOrders(1,5) ) as t 
WHERE    CustomerType = 'S'
ORDER BY StoreName,
         convert(MONEY,TotalDue) DESC

Another derived-table example of CROSS JOIN with a table-valued function (the function definition is shown below the query):



select * from dbo.Course
cross join (select * from getcoursebycidtable(3))as t


create function dbo.getcoursebycidtable(@id int)
returns table
as
return
select * from dbo.Course
where Courseid=@id

What is service broker? 
Service Broker is a process of sending and receiving guaranteed, asynchronous messages by using extensions to the Transact-SQL Data Manipulation Language (DML). Messages can be sent to a queue in the same database as the sender, to another database in the same SQL Server instance, or to another SQL Server instance either on the same server or on a remote server.
Service Broker in Microsoft SQL Server is a new technology that provides messaging and queuing functions between instances. The basic functions of sending and receiving messages forms a part of a “conversation.” Each conversation is considered to be a complete channel of communication. Each Service Broker conversation is considered to be a dialog where two participants are involved.
Comparing SQL Server Service Broker against MSMQ on the following criteria:
  1. Development
  2. Troubleshooting
  3. Performance (let's say we need to process 100,000 messages daily, having avg size around 25 KB)
  4. Scalability
Development
Service Broker requires a lot of boilerplate, which is a pain, but unless you're planning on having lots of different queues it's manageable. Sql Server projects in Visual Studio take a lot of the pain of deploying away.
Troubleshooting
Service Broker is a black box: messages go in, and they usually come out, but if they don't, then troubleshooting can be problematic, and all you can do is query the system views. Sometimes you simply can't find out what has gone wrong. This is annoying, but MSMQ has the same kind of issues.
Performance
Service Broker performance is excellent. We are processing a lot more than 100,000 messages per day, more than 30,000 per hour at our SLA load, and our message sizes are large. I would estimate we process close to 100,000 messages per hour during heavy load testing.
For best performance I would advise you to use a Dialog Pool like this one as creating a Service Broker dialog can be an expensive operation.
You will also want to use the Error Handling procedures detailed by Remus Rusanu. (If you do use Service broker, you might as well read everything Remus has written on the subject before you start, as you'll end out reading it eventually!)
Scalability
You can certainly use more than one server to scale up if required, though we haven't had to do so, and from the load size you mention I don't think you would need to either.
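For reference, a minimal single-database Service Broker setup looks roughly like this (all object names are hypothetical, and a real deployment also needs activation and the error handling discussed above):

```sql
-- Sketch: minimal Service Broker objects and one message send; names are hypothetical.
CREATE MESSAGE TYPE [//Demo/Request] VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT [//Demo/Contract] ([//Demo/Request] SENT BY INITIATOR);
CREATE QUEUE DemoTargetQueue;
CREATE SERVICE [//Demo/TargetService] ON QUEUE DemoTargetQueue ([//Demo/Contract]);
CREATE QUEUE DemoInitiatorQueue;
CREATE SERVICE [//Demo/InitiatorService] ON QUEUE DemoInitiatorQueue;
GO
-- Open a conversation (dialog) and send one message on it.
DECLARE @h uniqueidentifier;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [//Demo/InitiatorService]
    TO SERVICE   '//Demo/TargetService'
    ON CONTRACT  [//Demo/Contract]
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h
    MESSAGE TYPE [//Demo/Request] (N'<msg>hello</msg>');
```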

Difference Between JOIN and UNION:

Union:

1. All SELECT clauses must have the same number of expressions.
2. Duplicate rows are displayed only once.
Join:
1. Contains data from columns of more than one table, including duplicates.
2. Tables are joined on shared common columns.
What is the scope (lifetime) of a CTE (Common Table Expression) inside a stored procedure?
A CTE is a disposable view, scoped to a single statement.
1) Inside a stored procedure, it throws an error if any other statement (INSERT/UPDATE/DELETE/any non-CTE statement) comes between the CTE definition and the statement that uses it:
create proc getcteupdate
as
begin
with a
as
(
select emp_id,emp_name,sum(emp_sal) [sumsalupto600] from dbo.Employee_Demo_Audit
where emp_sal between 200 and 600
group by emp_id,emp_name
)
,
b
as
(
select emp_id,emp_name,sum(emp_sal) [sumsalabove700] from dbo.Employee_Demo_Audit
where emp_sal between 700 and 1000000
group by emp_id,emp_name
)
select * from dbo.Employee_Demo_Audit ------------------ this intervening SELECT breaks the CTE
select a.emp_id,b.emp_name,a.[sumsalupto600],b.[sumsalabove700]
from a left join b
on a.emp_id=b.emp_id

end
The procedure parses successfully when it is created, but throws an error when executed:
(14 row(s) affected)
Msg 208, Level 16, State 1, Line 24
Invalid object name 'a'.
2) You can refer to the CTE in a subquery or a join within that one statement, but you can't refer to it a second time; that throws an "invalid object name" error, as with the SELECT COUNT(*) FROM a below:
alter proc getcteupdate
as
begin
with a
as
(
select emp_id,emp_name,sum(emp_sal) [sumsalupto600] from dbo.Employee_Demo_Audit
where emp_sal between 200 and 600
group by emp_id,emp_name
)
,
b
as
(
select emp_id,emp_name,sum(emp_sal) [sumsalabove700] from dbo.Employee_Demo_Audit
where emp_sal between 700 and 1000000
group by emp_id,emp_name
)
select a.emp_id,b.emp_name,a.[sumsalupto600],b.[sumsalabove700]
from a left join b
on a.emp_id=b.emp_id
select count(*) from a
end

Msg 208, Level 16, State 1, Line 26
Invalid object name 'a'.


What is Clustered index and Non-Clustered index on SQL Server?
How many can we able to create?

Clustered and Nonclustered Indexes

An index is an on-disk structure associated with a table or view that speeds retrieval of rows from the table or view. An index contains keys built from one or more columns in the table or view. These keys are stored in a structure (B-tree) that enables SQL Server to find the row or rows associated with the key values quickly and efficiently.
A table or view can contain the following types of indexes:
  • Clustered
    • Clustered indexes sort and store the data rows in the table or view based on their key values. These are the columns included in the index definition. There can be only one clustered index per table, because the data rows themselves can be sorted in only one order.
    • The only time the data rows in a table are stored in sorted order is when the table contains a clustered index. When a table has a clustered index, the table is called a clustered table. If a table has no clustered index, its data rows are stored in an unordered structure called a heap.
  • Nonclustered
    • Nonclustered indexes have a structure separate from the data rows. A nonclustered index contains the nonclustered index key values and each key value entry has a pointer to the data row that contains the key value.
  Nonclustered index key value (pointer) --------> data row --------> key value
    • The pointer from an index row in a nonclustered index to a data row is called a row locator. The structure of the row locator depends on whether the data pages are stored in a heap or a clustered table. For a heap, a row locator is a pointer to the row. For a clustered table, the row locator is the clustered index key.
    • You can add nonkey columns to the leaf level of the nonclustered index to by-pass existing index key limits, 900 bytes and 16 key columns, and execute fully covered, indexed, queries. For more information, see Create Indexes with Included Columns.
Both clustered and nonclustered indexes can be unique. This means no two rows can have the same value for the index key. Otherwise, the index is not unique and multiple rows can share the same key value. For more information, see Create Unique Indexes.
Indexes are automatically maintained for a table or view whenever the table data is modified.
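The index types above can be sketched as follows (table and index names are hypothetical, chosen for illustration):

```sql
CREATE TABLE dbo.Employee (Id INT NOT NULL, Name VARCHAR(50), DeptId INT);

-- Only one clustered index per table: it defines the sort order of the data rows.
CREATE UNIQUE CLUSTERED INDEX CIX_Employee_Id ON dbo.Employee (Id);

-- A nonclustered index is a separate structure; the INCLUDE clause adds
-- nonkey columns to the leaf level so queries on DeptId that also need
-- Name are fully covered by the index.
CREATE NONCLUSTERED INDEX IX_Employee_DeptId
    ON dbo.Employee (DeptId) INCLUDE (Name);
```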
What is the maximum number of indexes per table?
For SQL Server 2005: 1 clustered index + 249 nonclustered indexes = 250 indexes
For SQL Server 2008: 1 clustered index + 999 nonclustered indexes = 1,000 indexes

Maximum Capacity Specifications for SQL Server 2005:

SQL Server 2005 Database Engine object: maximum sizes/numbers (identical for 32-bit and 64-bit unless noted)

Batch size: 65,536 * network packet size
Bytes per short string column: 8,000
Bytes per GROUP BY, ORDER BY: 8,060
Bytes per index key: 900
Bytes per foreign key: 900
Bytes per primary key: 900
Bytes per row: 8,060
Bytes per varchar(max), varbinary(max), xml, text, or image column: 2^31 - 1
Characters per ntext or nvarchar(max) column: 2^30 - 1
Clustered indexes per table: 1
Columns in GROUP BY, ORDER BY: limited only by number of bytes
Columns or expressions in a GROUP BY WITH CUBE or WITH ROLLUP statement: 10
Columns per index key: 16
Columns per foreign key: 16
Columns per primary key: 16
Columns per base table: 1,024
Columns per SELECT statement: 4,096
Columns per INSERT statement: 1,024
Connections per client: maximum value of configured connections
Database size: 524,258 terabytes
Databases per instance of SQL Server: 32,767
Filegroups per database: 32,767
Files per database: 32,767
File size (data): 16 terabytes
File size (log): 2 terabytes
Foreign key table references per table: 253
Identifier length (in characters): 128
Instances per computer: 50 instances on a stand-alone server for all SQL Server 2005 editions except Workgroup Edition (maximum 16); 25 instances on a failover cluster
Length of a string containing SQL statements (batch size): 65,536 * network packet size
Locks per connection: maximum locks per server
Locks per instance of SQL Server: up to 2,147,483,647 (32-bit); limited only by memory (64-bit)
Nested stored procedure levels: 32
Nested subqueries: 32
Nested trigger levels: 32
Nonclustered indexes per table: 249
Parameters per stored procedure: 2,100
Parameters per user-defined function: 2,100
REFERENCES per table: 253
Rows per table: limited by available storage
Tables per database: limited by number of objects in a database
Partitions per partitioned table or index: 1,000
Statistics on non-indexed columns: 2,000
Tables per SELECT statement: 256
Triggers per table: limited by number of objects in a database
UNIQUE indexes or constraints per table: 249 nonclustered and 1 clustered
User connections: 32,767
XML indexes: 249
Can we create a clustered index on a column that does not have a primary key constraint?

Yes. You can create a clustered index on a column other than the primary key column, provided the primary key constraint (if any) was specified as nonclustered.
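A sketch of this arrangement (table and index names are hypothetical): declare the primary key as nonclustered, then put the clustered index on another column:

```sql
CREATE TABLE dbo.Orders
(
    OrderId INT NOT NULL PRIMARY KEY NONCLUSTERED,  -- PK backed by a nonclustered index
    OrderDate DATETIME NOT NULL
);

-- The single clustered index goes on a different, non-key column.
CREATE CLUSTERED INDEX CIX_Orders_OrderDate ON dbo.Orders (OrderDate);
```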

Can you identify table SELECT events using SQL Profiler?

SQL:StmtCompleted

To see the table SELECT column list and the standard events, use the Events Selection tab in the Profiler trace properties (the original post showed screenshots here).



What are Sparse Columns in SQL 2008?

Sparse Column is one more new feature introduced in SQL SERVER 2008.
Sparse columns are ordinary columns that have optimized storage for NULL values.
Storing a NULL value in a sparse column doesn't take any space, but storing a non-NULL value in a sparse column takes 4 bytes more space than a non-sparse column of the same data type.
GO
CREATE TABLE SPARSECOLUMNTABLE
(
    col1 INT IDENTITY(1,1),
    col2 DATETIME SPARSE,
    col3 INT SPARSE
)
CREATE TABLE NONSPARSECOLUMNTABLE
(
    col1 INT IDENTITY(1,1),
    col2 DATETIME,
    col3 INT
)
GO
-- col1 is an identity column, so only col2 and col3 are supplied
INSERT INTO SPARSECOLUMNTABLE VALUES(NULL, NULL)
INSERT INTO NONSPARSECOLUMNTABLE VALUES(NULL, NULL)
GO 25000
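One way to verify the storage difference (a sketch, using the table names from the example above) is to compare the space used by the two tables after the inserts:

```sql
-- Compare reserved/data space of the two tables.
-- With all-NULL sparse columns, SPARSECOLUMNTABLE should report
-- noticeably less data space than NONSPARSECOLUMNTABLE.
EXEC sp_spaceused 'SPARSECOLUMNTABLE';
EXEC sp_spaceused 'NONSPARSECOLUMNTABLE';
```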

What is a Columnstore index in SQL 2012?

Columnstore indexes in the SQL Server Database Engine can be used to significantly speed up the processing time of common data warehousing queries.

Sample:

CREATE NONCLUSTERED COLUMNSTORE INDEX csindx_simple
ON SimpleTable
(OrderDateKey, DueDateKey, ShipDateKey);


How does SQL Server store data?

Data stored in a SQL Server database is organized in 8,192-byte (8 KB) pages. Within each database, every page has a unique address made up of the file number and the page number within that file.


What are the wait types and waits in SQL Server?

Wait types:
1) ASYNC_NETWORK_IO - network
2) BACKUPIO - backup
3) CXPACKET - query; used to synchronize threads involved in a parallel query. This wait type only means a parallel query is executing.
4) WAITFOR - background
5) LCK_M_IX - lock
Waits:
    • Resource waits are caused by a particular resource, perhaps a specific lock that is unavailable when the request is submitted. Resource waits are the ones you should focus on for troubleshooting the large majority of performance issues.
    • External waits occur when a SQL Server worker thread is waiting on an external process, such as an extended stored procedure, to complete. An external wait does not necessarily mean that the connection is idle; rather, it might mean that SQL Server is executing external code which it cannot control.
    • Queue waits occur when a worker thread is idle and waiting for work to be assigned to it. They normally apply to internal background tasks, such as ghost cleanup, which physically removes records that have been previously deleted. Normally you don't have to worry about any performance degradation due to queue waits.
    Which DMVs track what is currently running in SQL Server?

    sys.dm_exec_sessions
    sys.dm_exec_requests
    How to check the last updated statistics on a table in SQL Server?

    The following displays the status of all statistics on a table:
    USE DatabaseName;
    GO
    EXEC sp_autostats 'TableName';
    GO
    (Or)
    The best way to get information about statistics is through the command:
    DBCC SHOW_STATISTICS (TableName, IndexName)

    List some SQL Server performance counters in Perfmon:

    • Transactions/sec
    • Log Cache Hit Ratio
    • Log Cache Reads/sec
    • Log Bytes Flushed/sec
    • Log Flush Wait Time
    • Log Flush Waits/sec
    • Log Flushes/sec
    • Percent Log Used
    
    
    List out Some DMV's in SQL Server:
    
    
    Dynamic Management Views (DMVs):
    Dynamic management views and functions return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance.
    1) To see wait types using DMVs:
    SELECT DISTINCT wait_type
    FROM sys.dm_os_wait_stats;
    2)To check Currently Running Query/Statements in SQL Server:
    1)sys.dm_exec_sessions
    2)sys.dm_exec_requests
    3)To check index details using DMV’s:
    sys.dm_db_missing_index_details ----to check missing indexes.
    sys.dm_db_missing_index_columns ----to check missing index columns
    sys.dm_db_index_usage_stats ----to check index usage
    sys.dm_db_index_physical_stats-----to check physical stats
    4)To check I/O related info using DMV’s
    sys.dm_io_pending_io_requests
    5) To check Object Related info using DMV’s
    sys.dm_sql_referenced_entities
    sys.dm_sql_referencing_entities
    6)To see Replication related info using DMV’s:
    sys.dm_repl_traninfo
    sys.dm_repl_schemas
    sys.dm_repl_articles
    7) To check audit related info using DMV’s:
    sys.dm_audit_actions 
    8)To check Transaction related info using DMV’s:
    sys.dm_tran_locks
    sys.dm_tran_database_transactions
    sys.dm_tran_active_transactions
    9) To check operating system info using DMVs:
    sys.dm_os_performance_counters
    sys.dm_os_memory_objects
    sys.dm_os_process_memory 
    
    sys.dm_os_waiting_tasks
    
    
    
    
    10) To get query text and plans:
    
    
    sys.dm_exec_query_stats 
    sys.dm_exec_sql_text
    
    
    
    
    
    
    Can we use DML statements inside a UDF?

    No. You cannot use DML (INSERT/UPDATE/DELETE) statements against permanent tables inside a user-defined function.
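A minimal sketch illustrating the restriction (the table and function names are hypothetical); attempting to create this function fails at compile time:

```sql
-- Hypothetical example: a scalar UDF that tries to UPDATE a permanent table.
-- CREATE fails with:
-- Msg 443: Invalid use of a side-effecting operator 'UPDATE' within a function.
CREATE FUNCTION dbo.fnTryUpdate()
RETURNS INT
AS
BEGIN
    UPDATE dbo.SomeTable SET SomeCol = 1;  -- DML on a permanent table: not allowed
    RETURN 1;
END
```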
    
    
    
    

    Difference Between Temporary Table and Table Variable:

    1. You can modify the structure of a temporary table using ALTER, DROP, and CREATE statements. You cannot modify the structure of a table variable.
    2. Temporary tables honor the explicit transactions defined by the user. Table variables do not participate in explicit user transactions.
    3. Temporary tables are not allowed in user-defined functions; table variables can be used in user-defined functions.
    Sample:
    ALTER FUNCTION [dbo].[samplfunction]()
    RETURNS VARCHAR(MAX)
    AS
    BEGIN
        DECLARE @intCounter INT = 2
        CREATE TABLE #TEMPTABLE(TID INT)
        INSERT INTO #TEMPTABLE
        SELECT @intCounter
        RETURN @intCounter
    END
    O/P:
    Msg 2772, Level 16, State 1, Procedure samplfunction, Line 6
    Cannot access temporary tables from within a function.
    Msg 2772, Level 16, State 1, Procedure samplfunction, Line 8
    Cannot access temporary tables from within a function.
    4. A temporary table supports adding indexes explicitly after creation, and it can also have the implicit indexes that result from PRIMARY KEY and UNIQUE constraints. A table variable does not allow explicit index creation after its declaration; the only indexes are the implicit ones created by a PRIMARY KEY or UNIQUE constraint defined during the declaration. Sample implicit index creation: DECLARE @sample TABLE (Tid INT PRIMARY KEY, Tname VARCHAR(300)) -- the primary key creates a clustered index.
    For more info see
    
    
     http://sqltechi.blogspot.in/2015/04/differences-in-sql-server.html
    
    
    Whats new in SQL 2008?
    
    
    1) Variable declaration allows initialization:
    Prior to SQL Server 2008, to initialize a variable we needed to first declare it and then initialize it using a SET/SELECT statement, as shown below:
    DECLARE @COUNT INT
    SET @COUNT = 100
    Now in SQL Server 2008, variable declaration allows initialization similar to C#. Instead of writing two statements, we can write a single statement:
    DECLARE @COUNT INT = 100
    2) Insert multiple rows using a single INSERT statement
    To understand this feature, first create an Employee table using the script below:
    CREATE TABLE DBO.Employee
    ( Id INT,  Name VARCHAR(50) )
    
    
    Now in SQL Server 2008 we can accomplish the same by writing the script below:
    INSERT INTO dbo.Employee
    VALUES(1,'Basavaraj') ,
           (2,'Shashank') ,
           (3,'Monty')
    3) Arithmetic assignment operators
    SQL Server 2008 also supports the arithmetic assignment operators below:
    Operator  Usage            Description
    +=        SET @x += @y     Same as: SET @x = @x + @y
    -=        SET @x -= @y     Same as: SET @x = @x - @y
    *=        SET @x *= @y     Same as: SET @x = @x * @y
    /=        SET @x /= @y     Same as: SET @x = @x / @y
    %=        SET @x %= @y     Same as: SET @x = @x % @y
    4)Table-Valued Parameters in Sql Server:
    It provides option for the Client Applications to pass multiple rows of Data to Sql Server.
    Prior to this, if we needed to pass multiple rows of data from a client application to SQL Server, we used to model the input data as XML or comma-separated values, pass it to the stored procedure, and inside the stored procedure convert the XML/comma-separated values into a table variable or temporary table.
    You can find detailed information on the Table-Valued Parameters and also on calling Stored Procedure with Table-Valued Parameter from Sql Server and C# .Net Code @http://sqlhints.com/2011/09/09/table-valued-parameters-in-sql-server/
    5) MERGE Statement
    Merge statement is one of the interesting T-Sql enhancements of Sql Server 2008. With Merge statement we can very efficiently perform multiple DML operations like INSERT, UPDATE and DELETE on the target table data based on Source table data and the join condition specified between them.
    You can find detailed information on MERGE Statement @http://sqlhints.com/2011/09/24/merge-statement-in-sql-server-2008/
    6) Sparse Column
    Sparse Column is one more new feature introduced in SQL SERVER 2008. Storing a null value in a sparse column doesn’t take any space, but storing a non-null value in sparse column takes 4 bytes extra space than the non-sparse columns of the same data type.
    You can find detailed information on Sparse Column @http://sqlhints.com/2011/11/12/sparse-columns-in-sql-server-2008/
    7) Date and Time Data Types
    Date, Time, DateTime2 etc are the new date and time data type introduced in SQL SERVER 2008. You can find detailed information on DateTime2 datatype in the article Difference between DateTime and DateTime2 DataType.
    8) SP_Depends results were not always reliable/accurate. To resolve this, SQL SERVER 2008 introduced two DMVs: sys.dm_sql_referencing_entities and sys.dm_sql_referenced_entities.
    9) Filtered Indexes
    Filtered Index (i.e. Index with where clause) is one of the new feature introduced in Sql Server 2008. It is a non-clustered index, which can be used to index only subset of the records of a table. As it will have only the subset of the records, so the storage size will be less and hence they perform better from performance perspective compared to the classic non-clustered indexes.
    For detailed information on filtered index you can go through the article A-Z of Filtered Indexes with examples in Sql Server
    Below is an example Filtered Index Creation Script:
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE NONCLUSTERED INDEX IX_Employee_EmployeeId
     ON Employee(EmployeeId) WHERE EmployeeId > 500
    GO
    10) Policy Based Management (PBM) introduced
    11) Central Management Server (CMS) introduced
    12) Backup with compression introduced
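The MERGE statement mentioned in point 5 can be sketched as follows (table and column names are hypothetical):

```sql
-- Hypothetical example: synchronize TargetEmployee with SourceEmployee.
MERGE dbo.TargetEmployee AS t
USING dbo.SourceEmployee AS s
    ON t.Id = s.Id
WHEN MATCHED THEN
    UPDATE SET t.Name = s.Name               -- row exists in both: update
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Name) VALUES (s.Id, s.Name)  -- new row in source: insert
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;                                  -- row gone from source: delete
```

A single MERGE replaces separate UPDATE, INSERT, and DELETE passes over the same join, reading the target only once.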
    
    
    Whats new in SQL 2012?
    
    
    SEQUENCE object
    THROW
    More options:
    • Conversion functions
    • String functions
    • Logical functions
    • Date and time functions
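The SEQUENCE object and THROW listed above can be sketched as follows (the sequence name is hypothetical):

```sql
-- SEQUENCE: a schema-level object that generates numbers independently of any table.
CREATE SEQUENCE dbo.OrderSeq START WITH 1 INCREMENT BY 1;
SELECT NEXT VALUE FOR dbo.OrderSeq;  -- each call returns the next number

-- THROW: raise or re-raise errors with cleaner syntax than RAISERROR.
BEGIN TRY
    SELECT 1/0;  -- force a divide-by-zero error
END TRY
BEGIN CATCH
    THROW;       -- re-throws the original error with its number and message
END CATCH
```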

    Columnstore index in SQL Server 2012

    Microsoft introduced the columnstore index in SQL Server 2012 to provide significantly improved performance for data warehousing types of queries. Microsoft states that for some types of queries, columnstore indexes can provide up to 10x performance improvements
    Unlimited concurrent connections introduced in SQL 2012.
    In SQL 2008, the maximum number of concurrent connections was 32,767.

     What are the Difference between SQL server 2008 R2 (10.5) and SQL Server 2012 (11)?

    Difference between SQL Server 2008 R2 and SQL Server 2012
    
    SQL Server 2008 R2 (SQL Server 10.5) :
    
    
    1.SQL Server 2008 R2 is codenamed as Kilimanjaro
    
    2.In SQL Server 2008 R2, rebooting is required for OS patching, hence server downtime is high
    
    3.SQL Server 2008 R2 does not have this feature of availability groups, hence fast recovery is not possible.
    
    4.SQL Server 2008 R2 uses 27 bit precision for spatial calculations
    
    5.CONCAT(), FORMAT() and TRY_CONVERT() functions are not available in SQL Server 2008 R2
    
    6.SQL Server 2008 R2 is slow compared to SQL Server 2012.
    
    7.However buffer rate is less because there is no data redundancy in SQL Server 2008 R2
    
    8.Data visualization is not supported in SQL Server 2008 R2
    
    9.Spatial features are limited in SQL Server 2008 R2; a traditional approach to geographical elements is used instead.
    
    10.The maximum number of concurrent connections to SQL Server 2008 R2 is 32,767.
    
    11. SQL Server 2008 R2 has only the old CONVERT() function for date formatting, for which we need to memorize cryptic style codes like 101 and 103 to convert datetime values to localized presentation formats
    
    CONVERT(DATETIME, '7/24/2010 12:00:00 AM', 101)
    
    12. There is no direct way to format a money/currency field in SQL Server 2008 R2
    
    
    
    SQL Server 2012 (SQL Server 11) :
    
    
    
    1.SQL Server 2012 is codenamed as Denali
    
    2.In SQL Server 2012, server downtime is reduced by 50%, since OS patching no longer requires repeated reboots.
    
    3.In SQL Server 2012, a high availability and disaster recovery feature has been introduced which duplicates the data and rapidly recovers from loss.
    
    4.The SQL Server 2012 uses 48 bit precision for spatial calculations
    
    5.CONCAT(), FORMAT() and TRY_CONVERT() functions are newly included in SQL Server 2012
    
    6.In SQL Server 2012, the performance is 10 times faster than the predecessor.
    
    7.Buffer rate is high in SQL Server 2012 because of data compression.
    
    8.Data visualization tool is available in SQL Server 2012.This allows snapshots of data.
    
    9.Support for persistent computed columns and extra geographical approach is possible with spatial features in SQL Server 2012.
    
    10.SQL server 2012 has unlimited concurrent connections.
    
    
    
    11. SQL Server 2012 comes with the new FORMAT() function, similar to the .NET Format() method, for changing the date/currency format etc.
    
    This example displays the date in Taiwan format. Taiwan uses traditional Chinese characters.
    
    DECLARE @date DATETIME = '12/21/2011';
    
    SELECT FORMAT ( @date, 'MMMM dddd d', 'zh-TW' ) AS FormattedDate;
    
    
    
    In US format
    
    DECLARE @date DATETIME = CONVERT(datetime, '2011/01/01 2:00:00');
    
    SELECT FORMAT ( @date, 'yyyy/MM/dd hh:mm:ss tt', 'en-US' ) AS FormattedDate;
    
    Result
    
    FormattedDate
    
    2011/01/01 02:00:00 AM 
    

     Difference Between 2012 and 2014 SQL Server Enterprise Edition?

    1. Memory support: the earlier versions of SQL Server (2008 R2 and 2012) support only 64 GB. In 2014 the memory support is increased to 128 GB.
    
    
    2. Indexes: In 2012, columnstore indexes were introduced, which were a great help to data warehouse workloads. But they were limited to nonclustered indexes and supported very few data types. In 2014 this feature was expanded to support clustered columnstore indexes, and many of the data type restrictions were removed.
    
    
    3. Buffer Pool Extension: use of fast SSD drives that can serve as a substitute for memory. This proves very useful when the server cannot take more RAM and the database size exceeds the amount of RAM.
    
    
    4. The new cardinality estimator: this estimator is great at fixing a lot of queries and providing better plans without rewriting the queries themselves.
    
    
    5. AlwaysOn Availability Groups: SQL Server 2014's AlwaysOn Availability Groups have been enhanced with support for additional secondary replicas and Windows Azure integration.
    
     
    
    
    Minimizing Deadlocks/Avoid Deadlocks in SQL Server?
    
    
    Deadlock:
    A deadlock occurs when the first process is waiting for a resource that is locked by the second process, and the second process is waiting for a resource that is locked by the first process.
    
    
    Database Normalization
    Bad database design is the primary cause of deadlocks. As a database developer or DBA, we must ensure that our databases are properly normalized, because it reduces the likelihood of deadlocks.
    Keep Transactions Small
    Break down one large transaction into several small transactions and then execute these transactions in batches.
    Reduce Transaction Time
    Reduce the transaction time by making sure that you are not performing the same reads over and over again. If your application needs to read the same data more than once, cache the data in variables, temporary tables, or table variables.
    
    
    Controlling Lock Escalation
    If applicable, use ROWLOCK or PAGLOCK hints to control lock escalation. Transaction locks in SQL Server consume memory resources, and as the number of locks increases, the available memory decreases.
    
    
    Consider Using NOLOCK Hint
    If a table stores only historical data, is updated only once a day, and is queried frequently during the day, then NOLOCK can be a better option. Note that it allows dirty reads, which means it may not return the latest committed records.
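A minimal sketch of the NOLOCK hint (table and column names are hypothetical); the query reads uncommitted data and does not block writers:

```sql
-- Dirty read: may return rows from transactions that later roll back.
SELECT OrderId, OrderDate
FROM dbo.OrderHistory WITH (NOLOCK)
WHERE OrderDate < '2012-01-01';
```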
    
    
    
    
    What is FILL FACTOR in SQL Server?
    
    
    Fill Factor:
    The percentage of free space per data page that should be left available in the index when it is first created or the next time it is rebuilt.
    For instance, with a fill factor of 75, about 25 percent of every data page is left empty.
    Fill factor controls the amount of free space left on the data pages of an index after an index is built or defragmented.
    This free space is made available to allow for records on the page to expand with the risk 
    that the change in record size may result in a page split. This is an extremely useful property 
    of indexes to use for index maintenance.
    FILLFACTOR Defines the amount of empty space to leave in each
    data page of an index when it is created. This is only applied at the time an index is created or rebuilt.
    The fill-factor option is provided for fine-tuning index data storage and performance.
    When an index is created or rebuilt, the fill-factor value determines the 
    percentage of space on each leaf-level page to be filled with data, reserving the 
    remainder on each page as free space for future growth. For example, specifying a 
    fill-factor value of 80 means that 20 percent of each leaf-level page will be left empty, providing space for index expansion as data is added to the underlying table.
    The empty space is reserved between the index rows rather than at the end of the index.
    The fill-factor value is a percentage from 1 to 100, and the server-wide default is 0 which
    means that the leaf-level pages are filled to capacity.
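A sketch of setting a fill factor when rebuilding an index (the index and table names are hypothetical):

```sql
-- Leave 20% free space on each leaf-level page for future row growth,
-- reducing the chance of page splits between rebuilds.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
REBUILD WITH (FILLFACTOR = 80);
```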
    Why do we need to update statistics, and what happens in the background when we update the statistics of a table?
    Out-of-date or inaccurate statistics can lead to execution plans that do not match the data in the table. Without proper execution plans, performance will suffer regardless of the indexes in place.
    The query optimizer develops cost-based execution plans to find the best plan for executing the submitted request. If the statistics for an index are incorrect or out of date, then the plan that is created will likely be inefficient.
    When statistics are updated correctly, the query optimizer has the information it requires to build cost-based query plans.
    What is an overlapping index in SQL Server?
    Indexes that repeat the same column(s) are called overlapping indexes.
    CREATE TABLE TestIndexes(Col1 INT, Col2 VARCHAR(10), Col3 DATE, Col4 BIT);
    CREATE INDEX ix_TestIndexes_Col1 ON dbo.TestIndexes (Col1);
    CREATE INDEX ix_TestIndexes_Col1_Col2 ON dbo.TestIndexes (Col1, Col2);
    CREATE INDEX ix_TestIndexes_Col1_Col2_Col3 ON dbo.TestIndexes (Col1, Col2, Col3);
    Here Col1 and Col2 are repeated, so these are called overlapping indexes.
    
    
    How can you find missing indexes in SQL Server when the DMVs are not accurate?
    
    To check missing index details using DMVs:
    sys.dm_db_missing_index_details ----to check missing indexes
    sys.dm_db_missing_index_columns ----to check missing index columns
    sys.dm_db_index_usage_stats ----to check index usage
    These are sometimes not accurate, or we have to filter them with a WHERE clause.
    
    In that case, the Database Engine Tuning Advisor (DTA) is the solution.
    
    What DTA can provide:
    
    • Recommend the best mix of indexes for databases by using the query optimizer to analyze queries in a workload.
    • Recommend aligned or nonaligned partitions for databases referenced in a workload.
    • Recommend indexed views for databases referenced in a workload.
    • Analyze the effects of the proposed changes, including index usage, query distribution among tables, and query performance in the workload.
    • Recommend ways to tune the database for a small set of problem queries.
    
    
    How to Prevent Fragmentation?
    
    
    The primary maintenance issue that can lead to a degradation of index performance is index fragmentation.
    Index fragmentation happens with the following operations:
    • INSERT operations
    • UPDATE operations
    • DELETE operations
    • DBCC SHRINKDATABASE operations
    
    The following three methods prevent/mitigate fragmentation:
    1. fill factor
    2. data types
    3. default values
    
    
    Fill factor is an option that can be used when building or rebuilding indexes. This property is used to determine
    how much space per page should be left available in the index when it is first created or the next time it is rebuilt.
    For instance, with a fill factor of 75, about 25 percent of every data page is left empty.
    
    
    Data Type:
    
    
    A good example of bad data typing comes from a previous experience with a data warehouse. The original design for one of the tables included a column with the data type VARCHAR(10). The column was populated with dates in the format yyyymmdd, with values similar to "20120401". As part of the import process, the date values were updated into the format yyyy-mm-dd. When the import was moved to production and millions of rows were being processed at a time, the increase in the length of the column from 8 to 10 characters led to an astounding level of fragmentation due to page splits. Resolving the problem was as simple as changing the data type of the column from VARCHAR(10) to CHAR(10).
    
    
    Default Values
    
    
    The NEWID() function does not generate GUIDs sequentially, so using it to generate values for the clustered index key can lead to severe levels of fragmentation.
    An alternative to the NEWID() function is the NEWSEQUENTIALID() function.
    
    
    The values will be sequential and the amount of fragmentation encountered will be much lower and less frequent.
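A sketch of using NEWSEQUENTIALID() as the default for a clustered key (the table name is hypothetical):

```sql
-- Sequential GUIDs keep new rows appending near the end of the index,
-- avoiding the page splits that random NEWID() values cause.
CREATE TABLE dbo.Orders
(
    OrderId UNIQUEIDENTIFIER NOT NULL
        DEFAULT NEWSEQUENTIALID() PRIMARY KEY CLUSTERED,
    OrderDate DATETIME NOT NULL
);
```

Note that NEWSEQUENTIALID() can only be used in a DEFAULT constraint, not called directly in queries.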
    
    
    
    
    What types of index maintenance are followed in your organization?
    
    
    1)Maintenance Plans like Reorganize Index Task,Rebuild Index Task,Statistics update
    
    
    Reorganize Index:
    Data pages in the index are reordered across the pages already allocated to the index. After the reorganization is complete, the physical order of pages in the index matches the logical order.
    In general, indexes fragmented less than 30 percent are reorganization candidates.
    
    
    Rebuild Index:
    Rebuilding an index builds a new contiguous copy of the index
    When the new index is complete, the existing index is dropped
    indexes with more than 30 percent fragmentation are considered good candidates for index rebuilds.
    
    
    Statistics update:
    
    When statistics are updated correctly, the query optimizer has the information it requires to build cost-based query plans.
    
    Update statistics as necessary to improve the query/execution plan.
    
    Ways to update and inspect statistics on a table:
    
    1) EXEC sp_updatestats; (or UPDATE STATISTICS TableName;)
    2) DBCC SHOW_STATISTICS ('Sales.SalesOrderDetail',
       PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID) -- to view statistics
    3) sys.stats and sys.stats_columns -- to view statistics metadata
    
    
    2)T-SQL Script
    
    
    • Reorganize indexes with less than 30 percent fragmentation.
    • Rebuild indexes with 30 percent or more fragmentation.
    • Ignore indexes that have less than 1,000 pages.
    • Use online rebuilds wherever possible.
    • If the clustered index is being rebuilt, rebuild all indexes in the table.
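The rules above can be sketched in T-SQL (object names in the commented ALTER INDEX statements are hypothetical); fragmentation is read from sys.dm_db_index_physical_stats:

```sql
-- Reorganize at <30% fragmentation, rebuild at >=30%; ignore small indexes.
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name AS IndexName,
       ips.avg_fragmentation_in_percent,
       CASE
           WHEN ips.page_count < 1000 THEN 'IGNORE'
           WHEN ips.avg_fragmentation_in_percent < 30 THEN 'REORGANIZE'
           ELSE 'REBUILD'
       END AS Recommendation
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id;

-- Then act on each index, for example:
-- ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;
-- ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD WITH (ONLINE = ON);
```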
    
    
    3)Database Engine Tuning Advisor(DTA)
    
    
    Tune database as per suggestion of DTA
    
    
    
    
    
    
    
    
    
    

    How to achieve this as comma seperated Value in SQL using XML METHOD?

    There are many scenarios where we have to show the values of column value as comma (or any other special character) separated string in result set.
    This can be easily achieved using XML path method in SQL.

    Let's take an example of ProductDetails Table:

    SELECT *  FROM  ProductDetails
    Now suppose we want to fetch the list of products along with available colors where multiple colors are shown as comma separated string.
    SELECT DISTINCT p.ProductName,
      STUFF((SELECT distinct ',' + p1.[Color]
             FROM ProductDetails p1
             WHERE p.ProductName = p1.ProductName
                FOR XML PATH(''), TYPE
                ).value('.', 'NVARCHAR(MAX)')
            ,1,1,'') Color
    FROM ProductDetails p;

    How to get Comma Separated Values (CSV) from a Table Column (normal methods)?

    Example 1:
    How to achieve comma-separated values using the STUFF function?
    USE AdventureWorks
    GO
    -- Check Table Column
    SELECT [Name]
    FROM HumanResources.Shift
    GO
    -- Get CSV values
    SELECT STUFF(
    (SELECT ',' + s.Name
    FROM HumanResources.Shift s
    ORDER BY s.Name
    FOR XML PATH('')),1,1,'') AS CSV
    GO

    Example 2:
    How to achieve comma-separated values using the SUBSTRING function?
    USE AdventureWorks
    GO
    -- Check Table Column
    SELECT Name
    FROM HumanResources.Shift
    GO
    -- Get CSV values
    SELECT SUBSTRING(
    (SELECT ',' + s.Name
    FROM HumanResources.Shift s
    ORDER BY s.Name
    FOR XML PATH('')), 2, 200000) AS CSV
    GO
    I consider XML as the best solution in terms of code and performance.
    Example 3: How to achieve comma-separated values using the COALESCE function?
    DECLARE @listStr VARCHAR(MAX)
    SELECT @listStr = COALESCE(@listStr + ',', '') + NumberCols
    FROM NumberTable
    SELECT @listStr
    
    
    What is difference between Transaction Log Backup and Differential Backup?
    
    
    Transaction Log Backup:
    
    
    A transaction log backup allows you to back up the active part of the transaction log.
    
    
     After the transaction log backup is issued, the space within the transaction log can be reused for other processes.  If a transaction log backup is not taken, the transaction log will continue to grow.
    
    
    Differential Backup:
    It records the data changes made after the last full database backup.
    
    
    differential backups work is that they will backup all extents that have changed since the last full backup.
    
    
    Difference:
    
    
    Transaction log backups give you point-in-time recovery; differentials do not. A differential is only the full backup plus whatever changes have happened since the full backup and up to the differential. It does not include any changes after the differential, but a t-log backup does.
    
    
    Here's an example scenario:
    
    
    12am full backup
    
    12pm diff
    
    12am-11:59pm - tlog backups every 15 minutes
    
    
    System crash at 6:01pm that requires complete recovery.
    
    So You would restore the 12am full backup, the 12pm diff, and then all of the
    tlogs from 12pm until 6pm. If you were able to do a tail backup at 6:01pm or
    after, then you could even restore that.
    
    
    If you don't do tlog backups, then you could only restore full backup + diff,
    which would mean data loss from 12pm until 6pm
    If your database is in the Simple recovery model, you can still use full and differential backups. This does not allow you to do point-in-time recovery, but it will allow you to restore your data to a more current point in time than if you only had a full backup.
    If your database is in the Full or Bulk-Logged recovery model you can also use differential backups to eliminate the number of transaction logs that will need to be restored.  Since the differential will backup all extents since the last full backup, at restore time you can restore your full backup, your most recent differential backup and then any transaction log backups that were created after the most recent differential backup.  This cuts down on the number of files that need to be restored.
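    The scenario above can be sketched in T-SQL. This is a minimal sketch; the database name, backup paths, and timestamp are hypothetical:

```sql
-- 12am: full backup
BACKUP DATABASE SalesDb TO DISK = 'D:\Backup\SalesDb_Full.bak';

-- 12pm: differential backup (extents changed since the full)
BACKUP DATABASE SalesDb TO DISK = 'D:\Backup\SalesDb_Diff.bak' WITH DIFFERENTIAL;

-- every 15 minutes: transaction log backup
BACKUP LOG SalesDb TO DISK = 'D:\Backup\SalesDb_1800.trn';

-- Recovery after the 6:01pm crash: full, most recent diff, then logs up to the crash
RESTORE DATABASE SalesDb FROM DISK = 'D:\Backup\SalesDb_Full.bak' WITH NORECOVERY;
RESTORE DATABASE SalesDb FROM DISK = 'D:\Backup\SalesDb_Diff.bak' WITH NORECOVERY;
RESTORE LOG SalesDb FROM DISK = 'D:\Backup\SalesDb_1800.trn'
    WITH STOPAT = '2015-06-01 18:01:00', RECOVERY;
```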
    How will you identify duplicate records in a SQL table?
    Answer #1 – Solution to identify duplicate records in a SQL table
    SQL scenario based Interview Question – Try SQL code Yourself
    Question –
    DECLARE @Employee as table (Eid int, Ename varchar(50))
    insert into @Employee values (1,'john'), (2,'james'), (2,'james'), (3,'krish'), (4,'robert'), (4,'robert'), (5,'clave')
    Select * from @Employee
    Answer –
    SELECT Eid, Ename, COUNT(Eid) AS NumOccurrences
    FROM @Employee
    GROUP BY Eid, Ename
    HAVING (COUNT(Eid) > 1)
    Question #2 – How to get distinct values from a SQL table without using DISTINCT keyword?
    Answer #2 – Solution to get distinct values from a SQL table without using DISTINCT keyword
    SQL scenario based Interview Question – Try SQL code Yourself
    Answer –
    select eid, ename from emp group by eid, ename
    Question #3 – How to get alternate rows from a SQL table?
    Answer #3 A – 1st Solution to get alternate rows from a SQL table – using Row Number
    SQL scenario based Interview Question – Try SQL code Yourself
    Answer –
    select row_number() over(order by eid) as rowid, * into #temptab FROM dbo.Emp
    select * from #temptab where (rowid % 2) <> 0
    Answer #3 B – 2nd Solution to get alternate rows from a SQL table – without using Row Number
    SQL scenario based Interview Question – Try SQL code Yourself
    Answer –
    select identity(int,1,1) as rowid, * into #Temptab from dbo.Emp
    select * from #Temptab where (rowid % 2) <> 0
    Difference Between SQL Server 2012 and 2014 Enterprise Edition
    1. Memory Support: The earlier versions of SQL Server (i.e. 2008 R2 and 2012) only support 64 GB of RAM. In 2014 the memory support is increased to 128 GB of RAM.
    2. Indexes: In 2012 columnstore indexes were introduced, which was a great help to data warehouse workloads. But they were limited to nonclustered indexes and supported very few datatypes. In 2014 this feature was expanded to support clustered columnstore indexes, and many of the datatype restrictions were removed.
    Types of SQL Server Replication?
    Types of Replication:
    1) Snapshot Replication – data changes very rarely
    2) Transactional Replication – data changes frequently
    3) Transactional Replication with Updatable Subscriptions – both publisher and subscriber can modify data
    4) Merge Replication – publishers and subscribers can modify data when connected
    What is MS SQL Server?
    Microsoft SQL Server or MS SQL Server or SQL Server or MSSQL is one of the most popular Relational Database Management Systems (RDBMS). It is developed by Microsoft Corporation. It supports ANSI SQL and runs on T-SQL.
    What is the initial version of SQL Server? The first version was MS SQL Server 1.0, released in 1989.
    What is the last stable version of SQL Server? The last stable version is SQL Server 2016, released in 2016.
    Kindly see more info @
     http://sqlserverversions.blogspot.in/
    http://sqlserverbuilds.blogspot.in/
    What are the Editions of SQL Server?
    The editions of MS SQL Server are given below:
    • Enterprise
    • Standard
    • Workgroup
    • Express
    What is the default size of the SQL Database? After a fresh installation of SQL Server, the default size of the mdf file is 3 MB and the ldf file is 1 MB.
    What is SQL Server's default TCP/IP port? Generally SQL Server runs on port 1433, its default port. But we can change it from the Network Utility TCP/IP properties.
    What are the SQL Server system databases? The MS SQL Server system databases are given below:
    • Master The master database contains all the system information for a SQL Server instance, such as login accounts, configuration settings, SQL Server initialization information, remote server information, ongoing processes, system error messages, tapes and disks available on the system, and active locks. It also stores the location of all the other databases. Without it, SQL Server cannot start and the other databases cannot be found.
    • Model The model database is used as the template for all new databases. If we create database objects (tables, views, SPs, functions, etc.) in model, then every newly created database will contain those objects.
    • Msdb The msdb database is used by the SQL Server Agent for scheduling alerts and jobs. It contains operational records such as database backup and restore history. If anyone doesn't want to work with jobs, alerts, log shipping, etc., he/she can easily ignore it.
    • Tempdb The tempdb holds the intermediate results of query processing and sorting. It is recreated automatically every time SQL Server is restarted.
    • Distribution The distribution database is used for replication purpose. It is stored as a system database on the Distributor server. It does not store any user table. It only stores the replication history information, transactions, snapshot jobs, synchronization status.
    • Resource The resource database is a hidden system database. It was introduced with SQL server 2005. It holds the system objects. Normally It isn’t possible to see the resource database. But we can see the data file from the location C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Binn.
    What are the authentication modes in SQL Server? How can they be changed? Windows Mode and Mixed Mode – SQL and Windows. To change the authentication mode, open SQL Server Management Studio, right-click the server and select Properties. Click the Security tab and change Server authentication according to your requirements.
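    The current mode can also be checked from T-SQL (a small sketch):

```sql
-- Returns 1 for Windows Authentication mode, 0 for Mixed Mode
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS WindowsAuthOnly;
```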
    What is a query execution plan? The query execution plan is a useful tool offered by SQL Server. It describes the steps and order of a SQL query during execution. From it, a DBA can decide which part of the query is slow. SQL Server can create execution plans in two ways:
    Actual Execution Plan – (CTRL + M) – created after execution of the query and contains the steps that were performed
    Estimated Execution Plan – (CTRL + L) – created without executing the query and contains an approximate execution plan
    What is SQL Profiler? SQL Profiler is a graphical tool available with SQL Server that allows system administrators to monitor events in an instance of Microsoft SQL Server. It can perform various significant tasks such as tracing what is running under the SQL Server Engine, finding out how queries are resolved internally, and what scripts are running to accomplish any SQL command. We can start SQL Profiler using the following two methods: Start >> All Programs >> Microsoft SQL Server 2008 >> Performance Tools >> SQL Server Profiler, or SQL Server Management Studio >> Tools >> SQL Server Profiler
    What is SQL Server Agent? The SQL Server Agent is one kind of service that helps us to configure scheduled tasks and system alerts. It runs continuously in the background as a Windows Service.
    What is Log Shipping? Log shipping is the automated backup of database and transaction log files, and restoring them onto a standby server. In SQL Server 2000 only Enterprise Edition supported log shipping; later versions also support it in Standard Edition. In log shipping the transaction log file from one server is automatically applied to the backup database on the other server. If one server is down, the other server will have the same db and can be used as the disaster recovery plan. The key feature of log shipping is that it will automatically back up transaction logs throughout the day and automatically restore them on the standby server at a defined interval.
    What is the WITH CHECK OPTION clause for a view? In a view, WITH CHECK OPTION prevents data modifications that do not conform to the WHERE clause of the view. It allows data to be updated via the view only if it belongs in the view.
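    A minimal sketch of WITH CHECK OPTION; the table and view names here are hypothetical:

```sql
CREATE VIEW dbo.ActiveStudents
AS
SELECT StudentID, Name, IsActive
FROM dbo.Students
WHERE IsActive = 1
WITH CHECK OPTION;
GO
-- This update fails: the modified row would no longer satisfy the view's WHERE clause
-- UPDATE dbo.ActiveStudents SET IsActive = 0 WHERE StudentID = 1;
```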
    What are DBCC commands? The Database Consistency Checker (DBCC) provides a set of commands to maintain databases (validation and status checks). Examples are given below:
    DBCC CHECKALLOC – checks disk allocation consistency.
    DBCC OPENTRAN – displays information about the oldest active transaction.
    DBCC HELP – displays help for DBCC commands.
    What is BCP? When is it used? BulkCopy or BCP is a tool used to copy huge amounts of data to and from tables and views. BCP does not copy structures from source to destination. In SQL, the BULK INSERT command helps us import a data file into a database table or view in a user-specified format.
    What is a Database Lock? A database lock tells a transaction whether the data item is currently being used by other transactions. There are two types:
    1. Shared Lock – allows only read operations
    2. Exclusive Lock – does not allow read or write operations by other transactions
    What does the NOLOCK query hint do? When we use the NOLOCK query hint, it informs the storage engine that we want to access the data whether or not the data is locked (e.g. by a transaction in another process). It makes the select statement faster, but it may read uncommitted (dirty) data. Microsoft advises using READUNCOMMITTED instead of NOLOCK. A sample example is given below:
    SELECT * FROM Students WITH (NOLOCK)
    What are DMVs? Dynamic management views (DMVs) return health information about a server instance. They were introduced in SQL Server 2005. Two examples are given below:
    SELECT * FROM sys.dm_os_wait_stats;
    SELECT * FROM sys.dm_exec_sessions;
    
    What is a CTE? A common table expression or CTE is a temporary result set that can be used within other statements (SELECT, INSERT, UPDATE, and DELETE). It is not stored as an object and lasts only for the duration of the query. It is defined using the WITH statement. A sample example is given below:
    WITH myCTE as
    (
          SELECT  
                      CollectionID
                      ,ROW_NUMBER() OVER(ORDER BY CollectionID) RowID
          FROM TestDB.dbo.Collections
    )
    SELECT * from myCTE
    WHERE RowID between 5 and 10
    
    When to use Common table expression (CTE)? A CTE can be used for the following purpose:
    • Recursive query
    • Instead of view when we don’t want to store the definition of views
    • Reference the same table multiple times in the same statement
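    The recursive case in the list above can be sketched with a small CTE that generates the numbers 1 to 10:

```sql
WITH Numbers AS
(
    SELECT 1 AS n
    UNION ALL
    SELECT n + 1 FROM Numbers WHERE n < 10
)
SELECT n FROM Numbers;
```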
    What is a data warehouse? The data warehouse or data warehousing is a centralized storage of large amounts of data used for reporting and data analysis purposes. Traditional database systems are operational or transaction oriented, but data warehouses are subject/business oriented. Warehouse data must be secured, reliable, easy to retrieve and easy to manage.
    Comparison of SQL temp tables and table variables. SQL offers four types of table structures to store data. They are:
    • Local temporary table (#table_name)
    • Global temporary table (##table_name)
    • Permanent table (table_name)
    • Table variable (@table_name)
    Local Temporary Table (#table_name) Local temporary tables are created in the system database tempdb and populated on disk. Local temporary tables are visible only to the connection that creates them, and are destroyed or deleted when the connection is closed. But we can manually clean up the data by using: DROP TABLE #temp. One of the main benefits of using this table is that it reduces the amount of locking and logging. One problem with using this table is that if we don't use ROLLBACK in a transaction then it may create problems. An example of a local temporary table is given below:
    CREATE TABLE #temp(
     ID INT,
     Name NVARCHAR(50)
    )
    
    Global Temporary Table (##table_name) Global temporary tables are visible to everyone or all sessions, and are deleted when all connections that have referenced them are closed. Global temporary tables function much like local temporary tables. They reduce the amount of locking and logging. An example of a global temporary table is given below:
    CREATE TABLE ##temp(
     ID  INT,
     Name NVARCHAR(50)
    )
    
    Permanent table (table_name) Permanent tables are visible to everyone and persist until they are explicitly dropped; they are not deleted when the server is restarted. A permanent table is created in the local database. Also, we can choose a different database or server to create a permanent table. An example of a permanent table is given below:
    CREATE TABLE temp(
     ID  INT,
     Name VARCHAR(50)
    )
    
    Table Variable (@table_name) Table variables are visible only to the connection that creates them and are deleted when the batch or stored procedure ends. A table variable is also created in tempdb, like a temp table, but it typically involves less locking and logging, so its performance is often higher than a #temp table for small amounts of data. It can have a primary key index. It is suitable for small amounts of data. Table variables are only allowed in SQL Server 2000+. We can't truncate a table variable. It has the following limitations:
    • Table variables can not have Non-Clustered Indexes. Only Clustered indexes can be created
    • We can’t create constraints in table variables
    • We can’t create default values on table variable columns
    • We can’t create the statistics against table variables
    An Example of table variable is given bellow:
    DECLARE @temp TABLE(
     ID  INT,
     Name VARCHAR(50)
    )
    
    Temp Table VS Table Variable Temp tables and table variables are not the same. The differences between them are given below: #temp table
    1. Stored in tempdb on disk
    2. Generally slower than a @table variable for small data sets
    3. Suitable for large amounts of data
    4. Can have indexes
    @table variable
    1. Also stored in tempdb, but with less locking and logging
    2. Generally faster than a #temp table for small data sets
    3. Suitable for small amounts of data
    4. Can only have a primary key index
    The use of temp table and table variables can improve the performance of our applications. One should have the accurate knowledge about on SQL temporary table and table variable.
    for more info visit http://cybarlab.com/difference-between-temp-table-and-table-variable
    What is Cloud Computing? Cloud computing is the use of computing resources (hardware and software) that are delivered as a service over a network (typically the Internet). Here some computers are connected according to grid system.
    History of Cloud Computing The idea of cloud computing first emerged in the 1960s, but it was implemented at scale in the 21st century. In 2006 Elastic Compute Cloud (EC2) was implemented by amazon.com.
    Examples of Cloud Computing Different types of cloud computing services are available today. Some providers offer cloud computing services totally free while others require a paid subscription. Some famous cloud services are given bellow:
    • Amazon Elastic Compute Cloud (EC2) – virtual IT
    • Google App Engine – application hosting
    • Google Apps – software as a service
    • Apple MobileMe – network storage
    Types of Cloud Computing Based on their service, cloud computing can be divided into 3 types:
    1. Infrastructure-as-a-Service (IaaS)
    2. Platform-as-a-Service (PaaS)
    3. Software-as-a-Service (SaaS)
    Infrastructure-as-a-Service (IaaS) The IaaS provider offer computer (physical or virtual machine) and other recourses. User can install OS, software etc according to his choice. It is like a new PC.
    Example- Amazon Elastic Compute Cloud (EC2)
    Platform-as-a-Service (PaaS) The PaaS provider offer Computing Platform including OS, API, Database, and Web Server. Programmer can develop his application and run without hardware & software cost.
    Example- Google App Engine, Windows Azure Compute, Force.com
    Software-as-a-Service (SaaS) The SaaS provider offers application software. They install and manage it. The client only uses it.
    Example- Google Apps, Microsoft Office 365, Onlive, GT Nexus, Marketo, and TradeCard.
    for more info http://cybarlab.com/cloud-computing
    What are the latest updates available for each version?
    • SQL Server 2016 – CTP3.2
    • SQL Server 2014 – SP1 (12.0.4100.1 – May 2015); CU3 for 2014 SP1 (12.0.4427.24 – October 2015); CU10 for 2014 RTM (12.0.2556.0 – October 2015); SQL Server 2014 builds
    • SQL Server 2012 – SP3 (11.0.6020.0 – November 2015); CU9 for 2012 SP2 (11.0.5641.0 – November 2015); CU16 for 2012 SP1 (11.0.3487.0 – May 2015); SQL Server 2012 builds
    • SQL Server 2008 R2 – SP3 (10.50.6000.34 – September 2014) – latest and final release for this version; SQL Server 2008 R2 builds
    • SQL Server 2008 – SP4 (10.0.6000.29 – September 2014) – latest and final release for this version; SQL Server 2008 builds
    • SQL Server 2005 – SP4 (9.00.5000.00 – December 2010)
    
    
    
    

    Advantages and disadvantages of using SQL Server database mirroring

    Using SQL Server database mirroring has multiple benefits: a built-in SQL Server feature, relatively easy to set up, can provide automatic fail over in high safety mode, etc. Database mirroring can be combined with other disaster recovery options such as clustering, log shipping, and replication
    Database mirroring will be removed from SQL Server in future versions in favor of AlwaysOn Availability Groups. Also, database mirroring is a per-database-only solution, which means that logins and jobs from the principal SQL Server must be manually recreated on the mirror. There is also the possibility of delay, which can only be reduced with better hardware

    How to disable database mirroring for a particular database?

    Explain how to disable database mirroring for a particular database.

    The database owner can manually stop a database mirroring session by removing mirroring from the particular database. Steps to do so:
    • Connect to the Server instance
    • Execute the statement ALTER DATABASE DbName SET PARTNER OFF
    Give CREATE TABLE syntax for a specific filegroup?

    CREATE TABLE TableName

    (ID int, ColumnName varchar(500))

    ON FileGroupName -------------------------------------------Important to mention the filegroup name

    Explain in detail:
    ALTER DATABASE DbName
    ADD FILEGROUP File_Group_Name
    After adding the new filegroup, kindly check it:
    CREATE TABLE TableName
    (ID int, TTG nvarchar(500))
    ON File_Group_Name---------------------------------------------------Important to mention the filegroup
    After creating the table on the specific filegroup:
    SELECT OBJ.object_id, OBJ.name AS ObjectName, OBJ.type_desc, PA.index_id, FG.name AS FilegroupName
    FROM sys.filegroups FG
    INNER JOIN sys.allocation_units AU ON AU.data_space_id = FG.data_space_id
    INNER JOIN sys.partitions PA ON PA.partition_id = AU.container_id
    INNER JOIN sys.objects OBJ ON OBJ.object_id = PA.object_id
    WHERE OBJ.object_id = (SELECT OBJECT_ID(N'dbo.TestTableFGonADDFileGroupTest'));
    
    
    If you don't mention any filegroup, the table will go to the primary filegroup
    
    
     CREATE TABLE testdefaultFG
    (ID int,testname nvarchar(500),Testchar char(40))
    
    
    What is resource governor?
    Resource Governor is a feature which can manage SQL Server Workload and System Resource Consumption. 
    We can limit the amount of CPU and memory consumption by limiting /governing /throttling on the SQL Server.
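    A minimal Resource Governor sketch; the pool and group names are hypothetical:

```sql
-- Create a pool that caps CPU and memory for reporting queries
CREATE RESOURCE POOL ReportingPool
    WITH (MAX_CPU_PERCENT = 30, MAX_MEMORY_PERCENT = 25);

-- Map a workload group to that pool (a classifier function would route sessions to it)
CREATE WORKLOAD GROUP ReportingGroup USING ReportingPool;

-- Apply the configuration
ALTER RESOURCE GOVERNOR RECONFIGURE;
```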

    What is MAXDOP and why it is important?

    When SQL Server runs on a computer with more than one processor or CPU, it detects the best degree of parallelism, that is the number of processors employed to run a single statement, for each query that has a parallel execution plan. You can use the max degree of parallelism option to limit the number of processors to use for parallel plan execution and to prevent run-away queries from impacting SQL Server performance by using all available CPUs.

    What values should I use for MAXDOP?

    To answer the question "What values should I use for MAXDOP?": It depends. It depends on the hardware, the environment (OLTP vs. OLAP), the load and so on.
    The default value is 0 (zero) and can be set or viewed using sp_configure. A value of 0 means that SQL Server will use all processors if a query runs in parallel.
    Below we can see the current value if we run sp_configure.

    How do I change MAXDOP(MAX Degree Of Parellelism) using SSMS or T-SQL?

    Below is the T-SQL command that you can run to change the value. In this example I am changing the value to 4. This means if a query uses a parallel execution plan it will only use four of the available processors.
    Within SSMS, right-click on a registered server and select Properties. Then go to the Advanced page as shown below. You can then change this value and click OK to save it.
    After making this change the value goes into effect immediately; there is no need to restart SQL Server
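    The same change via T-SQL, as a sketch ('max degree of parallelism' is an advanced option, so it must be made visible first):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Limit parallel plans to four processors
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;
```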
    Why do we need to Truncating the Transaction Log in SQL Server?
    If log records were never deleted from the transaction log, the logical log would grow until it filled all the available space on the disks holding the physical log files. At some point in time, old log records no longer necessary for recovering or restoring a database must be deleted to make way for new log records. The process of deleting these log records to reduce the size of the logical log is called truncating the log.
    USE [master]
    GO
    ALTER DATABASE [TestDb] SET RECOVERY SIMPLE WITH NO_WAIT
    DBCC SHRINKFILE(TestDbLog, 1)
    ALTER DATABASE [TestDb] SET RECOVERY FULL WITH NO_WAIT
    GO
    List DMVs(Dynamic Management View) to monitor DEADLOCK in SQL Server
    USE Master
    GO
    SELECT session_id, wait_duration_ms, wait_type, blocking_session_id 
    FROM sys.dm_os_waiting_tasks 
    WHERE blocking_session_id <> 0
    GO
    
    
    
    
    
    What do we call the objects which are transferred from source to destination in replication?
    Articles
    What is Difference Between COALESCE and ISNULL function?
    The COALESCE and ISNULL T-SQL functions are used 
    to return the first nonnull expression among the input arguments
    
    
    SET NOCOUNT ON;
    USE TSQL2012; -- this database is used in later examples
    DECLARE
      @x AS INT = NULL,
      @y AS INT = 1759,
      @z AS INT = 42;
    SELECT COALESCE(@x, @y, @z);
    SELECT ISNULL(@x, @y);
    When you run this code, both functions return the integer 1759.
    
    
    
    1) An advantage COALESCE has over ISNULL is that it supports more than two inputs, whereas ISNULL supports only two.
    
    
    2) COALESCE is a standard function (namely, defined by the ISO/ANSI SQL standards), whereas ISNULL is T-SQL-specific
    
    
    

    How do we change Database Owner in SQL Server?

    EXEC sp_changedbowner 'sa'
    -- or
    ALTER AUTHORIZATION ON DATABASE::YourDatabaseName TO sa

    To recover a database to a point in time, what recovery model should the db be in?

    
    The DB recovery models are Simple, Full, and Bulk-Logged.
    
    In Simple you can't take tlog backups.
    
    In Full you can take both full and tlog backups.
    
    So the answer is the Full recovery model.
    
    
    
    Types of Upgrade in SQL Server?
    
    
    
    
    In Place
    
    
    Side by Side
    
    
    
    
    In-Place Upgrade
    Just the thought of upgrading any software may send chills down your spine. Fortunately, the process of upgrading to SQL Server 2012 has been greatly improved over the years. SQL Server 2012 has several supported upgrade paths. Therefore, if you are currently running SQL Server on previous versions, you can quickly upgrade to SQL Server 2012 without upgrading to other versions. SQL Server 2005 with SP4 is the oldest version of SQL Server that has a direct upgrade path. If you are running a version older than this, you will need to upgrade to that version before you can perform an in-place upgrade to SQL Server 2012. For example, if you are currently running SQL Server 2000, you must upgrade to SQL Server 2005 with SP4 prior to running an in-place upgrade to SQL Server 2012.
    
    
    
    
    Side-by-Side Upgrade
    This type of upgrade may not stress you as much as an in-place upgrade, simply because the old server remains in place and can be made available quickly in the event of an installation failure. You will follow the same steps as outlined in the "Installing SQL Server from the Setup Wizard" section of this chapter. Once you've completed the steps, you will need to migrate your security, databases, replication configuration, maintenance plans, and any other custom configurations that have been added to your SQL Server installation. This process gives you the advantage of having a stable rollback plan. In the event of an installation failure or some other type of catastrophe, you can always turn the other server back on and continue operations as normal. Figure 2-3 illustrates a side-by-side migration.
    While this strategy offers several advantages, it could require that your organization purchase new hardware. In addition, this method may require that you have disk space that accommodates two identical databases. For organizations with very large databases, this could pose a problem.
    
    
    

    How do We change Owner of Database in SQL Server?

    To see DB Owner
    sp_helpdb 'dbname'
    We can change the db owner using SP_Changedbowner 'sa'
    or 
    ALTER AUTHORIZATION ON DATABASE::dbname to sa
    When to use CTE or CTE advantages ?
    1.  Create a recursive query. 
    2.  Substitute for a view when the general use of a  view is not required; that is, you do not have     to store the definition in metadata.
    3.  Reference the resulting table multiple times in  the same statement.
    4.  To find duplicates
    5. A CTE does not write anything to tempdb, hence it can have higher performance compared with the other two options, table variables and temp tables
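    Point 4 above (finding duplicates) can be combined with ROW_NUMBER() to delete them; the table and column names here are hypothetical:

```sql
WITH Dups AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY Eid, Ename ORDER BY Eid) AS rn
    FROM dbo.Emp
)
DELETE FROM Dups WHERE rn > 1;  -- keeps one row per (Eid, Ename)
```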
    Is it possible to disable and enable all the constraints on a table? Answer: Yes. Example:
    Disable
    1. alter table Student nocheck constraint all
    Enable
    1. alter table Student check constraint all
    Limitations
    • Enabling and disabling constraints works only for check constraints and foreign key constraints.
    • Enabling and disabling constraints does not work for default, primary key and unique constraints.
    -- Disable all constraints for database
    EXEC sp_msforeachtable "ALTER TABLE ? NOCHECK CONSTRAINT all"
    -- Enable all constraints for database
    EXEC sp_msforeachtable "ALTER TABLE ? WITH CHECK CHECK CONSTRAINT all"

    How to change the collation settings of your user databases?

    ALTER DATABASE DBName collate SQL_Latin1_General_CP1_CI_AI
    
    How to changes the collation setting on Server level?
    
    
    Step 1 
    
    
    SELECT SERVERPROPERTY(N'Collation')
    
    
    Step 2
    
    
    Backup the settings using select * from sys.configurations
    
    
    step 3
    
    
    Backup Scripts,logins,maintenance plan,operators,jobs from db
    
    
    Step 4
    
    
    Detach all user databases before rebuilding your system databases. If you leave databases attached they will be detached and will be found in the database folder.
    
    
    Step 5
    Now its time to rebuild your system databases. This operation will recreate your master database and all existing settings will be reset. Run the below command from a Windows command prompt. Make sure to run this command from the directory where you have placed your SQL Server setup files. Once you press enter, a separate window will appear to show you the progress bar. Once the rebuild is done, that window will disappear.
    Setup /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=MANVENDRA /SQLSYSADMINACCOUNTS=gourang\hariom /SAPWD=M@nVendr4 /SQLCOLLATION=SQL_Latin1_General_CP1_CI_AI
    
    
    Step 6
    Attach all user databases which were detached in Step 4
    Step 7
    Now change the collation settings of all user databases. It's not necessary to change the collation settings for the user databases, it totally depends on your requirement.
    Step 8
    Now run all of the scripts which were created in Step 3 to restore jobs, alerts, logins, operators, etc...  Also don't forget to change the server level configuration settings which were captured in Step 2.
    What is ANSI-NULLS in SQL Server?
    Specifies ISO compliant behavior of the Equals (=) and Not Equal To (<>) comparison operators when they are used with null values.
    
    
    When SET ANSI_NULLS is ON, a SELECT statement that uses WHERE column_name = NULL returns zero rows even if there are null values in column_name. A SELECT statement that uses WHERE column_name <> NULL returns zero rows even if there are nonnull values in column_name.
    
    
    When SET ANSI_NULLS is OFF, the Equals (=) and Not Equal To (<>) comparison operators do not follow the ISO standard. A SELECT statement that uses WHERE column_name = NULL returns the rows that have null values in column_name. A SELECT statement that uses WHERE column_name <> NULL returns the rows that have nonnull values in the column. Also, a SELECT statement that uses WHERE column_name <> XYZ_value returns all rows that are not XYZ_value and that are not NULL.
    
    
    When SET ANSI_NULLS is ON, all comparisons against a null value evaluate to UNKNOWN. When SET ANSI_NULLS is OFF, comparisons of all data against a null value evaluate to TRUE if the data value is NULL
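    A small sketch of the difference; the table and column names are hypothetical:

```sql
SET ANSI_NULLS ON;
SELECT * FROM dbo.Students WHERE MiddleName = NULL;  -- zero rows, comparison is UNKNOWN
SET ANSI_NULLS OFF;
SELECT * FROM dbo.Students WHERE MiddleName = NULL;  -- rows where MiddleName IS NULL
-- The portable form that works regardless of the setting:
SELECT * FROM dbo.Students WHERE MiddleName IS NULL;
```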
    
    
    What is ARITHABORT in SQL Server?
    Terminates a query when an overflow or divide-by-zero error occurs during query execution.
    
    
    Which is better to remove duplicates: DISTINCT or the GROUP BY clause?
    
    
    Both give the same execution plan, but SQL's logical processing order starts with the FROM, WHERE, GROUP BY, HAVING and then SELECT clauses.
    
    
    So the GROUP BY clause is arguably better.
    
    
Can a database using the simple recovery model be included in replication?


Yes. Databases using the simple, full, or bulk-logged recovery model can all be included in replication.
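A quick way to check the recovery model of every database before configuring replication:

```sql
-- The recovery model does not affect replication eligibility,
-- but it is useful to know what each database is set to
SELECT name, recovery_model_desc
FROM sys.databases
ORDER BY name;
```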
    
    
    What is Tail Log Backup in SQL Server?
    
    
    A log backup taken of the tail of the log just before a restore operation is called a tail-log backup.
    
    
    SQL Server 2005 and later versions usually require that you take a tail-log backup before you start to restore a database. The tail-log backup prevents work loss and keeps the log chain intact.
    
    
    We recommend that you take a tail-log backup in the following situations:
    • If the database is online and you plan to perform a restore operation on the database, before starting the restore operation, back up the tail of the log using WITH NORECOVERY:
      BACKUP LOG database_name TO <backup_device> WITH NORECOVERY
    • If the database is offline and does not start.
      Try to take a tail-log backup. Because no transactions can occur at this time, using WITH NORECOVERY is optional. If the database is damaged, use WITH CONTINUE_AFTER_ERROR, as follows:
      BACKUP LOG database_name TO <backup_device> WITH CONTINUE_AFTER_ERROR
      If the database is damaged, for example, if the database does not start, a tail-log backup succeeds only if the log files are undamaged, the database is in a state that supports tail-log backups, and the database does not contain any bulk-logged changes.
    What is use of Resource DB in SQL server?
The Resource database is a read-only, hidden database that contains all the system objects that are included with SQL Server.
    SQL Server system objects, such as sys.objects, are physically persisted in the Resource database, but they logically appear in the sys schema of every database. The Resource database does not contain user data or user metadata.
The Resource database makes upgrading to a new version of SQL Server an easier and faster procedure. In earlier versions of SQL Server, upgrading required dropping and creating system objects. Because the Resource database file contains all system objects, an upgrade is now accomplished simply by copying the single Resource database file to the local server.
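Although the Resource database is hidden, its version and last-update time are exposed through SERVERPROPERTY:

```sql
-- Inspect the Resource database metadata
SELECT SERVERPROPERTY('ResourceVersion')            AS ResourceDbVersion,
       SERVERPROPERTY('ResourceLastUpdateDateTime') AS ResourceDbLastUpdated;
```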
Are nested subreports possible in Crystal Reports?
No. Crystal Reports does not allow a subreport to be placed inside another subreport.
Can we downgrade/roll back a service pack upgrade, e.g. from SQL Server 2008 SP2 to SQL Server 2008 SP1?
Yes, we can.
Suppose you are running SQL Server 2005 SP3 and, because of the "End of Support" announced by Microsoft, you decide to apply SP4. You patch the production instance with SP4 successfully, but all of a sudden your application breaks, users are screaming, and you identify the reason: the patching. You decide to fail over to your DR (BCP) site, and meanwhile your plan is to roll the production instance back to SP3 so that users, and you as a DBA, can take a peaceful nap. How do you achieve that? Unfortunately, you have to uninstall SQL Server, install SQL Server 2005 RTM, apply SP3, and restore all of your databases, which is a very painful process.
Fortunately, starting with SQL Server 2008, Microsoft offered us a great enhancement: we can simply uninstall whatever update we have installed (see "View installed updates" in Control Panel).
Can a service pack or cumulative update be uninstalled to roll back in case of failure? We cannot uninstall a service pack or cumulative update on SQL Server 2005 or lower, but starting with SQL Server 2008 we can uninstall a service pack or cumulative update from Control Panel > Add or Remove Programs > View installed updates. To roll back a service pack or CU on SQL Server 2005 or lower, we need to completely uninstall SQL Server and reinstall SQL Server 2005 to the same build it was at before applying SP4, installed with the same collation as before. Once SQL Server 2005 is installed and brought up to the same build, replace the .mdf and .ldf files of all the system databases, or restore backups of all the system databases.
    What are ways of migrating SQL Server from lower version to higher version? If you want to upgrade a SQL Server instance from SQL Server 2008 R2 to SQL Server 2012, below are the different ways you can do this migration.
1. In-Place Upgrade – In this method, the existing instance of SQL Server is upgraded to the higher version, so we end up with one instance of SQL Server at the higher version, i.e. SQL Server 2012. The instance name remains the same, so the application connection string remains the same; the only change that may be required is installing the latest connectivity drivers.
2. Side-By-Side Upgrade – In this method a new instance of SQL Server 2012 is installed on the same server or a different server, and then all user databases, logins, jobs, and configuration settings need to be configured or created on the new SQL Server instance.
    What are the differences between In-Place Upgrade and Side-By-Side Upgrade in SQL Server 2008 R2?
– In an in-place upgrade, the instance name does not change, so there is no need to change the connection string. In a side-by-side upgrade, the instance name will differ if the new instance is installed on the same server; if it is installed on another server, the server name changes, which requires a change to the connection string.
– An in-place upgrade carries the risk of additional downtime if the upgrade fails, which ends with cleaning up and reinstalling everything from scratch; during that entire process a huge amount of downtime is required. In a side-by-side upgrade we are installing a new instance, possibly even on a new server, so any failure will not affect the existing SQL instance, which continues to serve clients.
– A side-by-side migration involves many additional tasks, such as backing up and restoring user databases on the new instance, creating logins, fixing orphaned users, configuring SQL Server settings appropriately, creating all the required jobs, etc. An in-place upgrade does not require many changes, as everything is migrated and readily available to use.
– Rolling back the SQL Server instance is not possible with the in-place method, but is fully possible with a side-by-side upgrade.
– The amount of downtime is greater with an in-place upgrade than with a properly planned side-by-side upgrade.
Where can I find the SQL Server Setup logs to troubleshoot setup failures? Setup logs can be found under C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log\ in the folder with the latest timestamp. Change 110 to 100 for SQL Server 2008 R2, 90 for SQL Server 2005, or 120 for SQL Server 2014.
There can be two summary files: one for the main setup workflow and another for the component update. There is also a file named Detail.txt which contains all the informational, warning, and error messages related to setup; this file usually points to the exact exception or error that caused the setup failure.
Reviewing the summary files and Detail.txt should help identify exactly where the problem was.
    Ref:
    http://www.sqlserverf1.com/tag/can-a-service-pack-or-cumulative-update-be-uninstalled-to-rolled-back-in-case-of-failures/
Can we drop a temp table and re-create it without committing the transaction?

Check whether the temp table exists and drop it before creating it, as below:

    IF OBJECT_ID('tempdb..#Results') IS NOT NULL
        DROP TABLE #Results
    
    CREATE TABLE #Results
    (
        Company             CHAR(3),
        StepId              TINYINT,
        FieldId             TINYINT,
    )
    
    select company, stepid, fieldid from #Results
    
    --Works fine to this point
    
    IF OBJECT_ID('tempdb..#Results') IS NOT NULL
        DROP TABLE #Results
    
    CREATE TABLE #Results
    (
        Company             CHAR(3),
        StepId              TINYINT,    
        FieldId             TINYINT,
        NewColumn           NVARCHAR(50)
    )
    
    select company, stepid, fieldid, NewColumn from #Results
    
    --Does not work
No. Inside an open transaction, the dropped temp table is not released until the transaction commits, so the second CREATE TABLE with the same name fails. We have to commit the transaction first, and only then is the temp table name available again:
    begin transaction
    IF OBJECT_ID('tempdb..#Results') IS NOT NULL
        DROP TABLE #Results
    
    CREATE TABLE #Results
    (
        Company             CHAR(3),
        StepId              TINYINT,    
        FieldId             TINYINT,
        NewColumn           NVARCHAR(50)
    )
    
    select company, stepid, fieldid, NewColumn from #Results
    commit

    To Create Linked Server from User/login What types of Permissions Need in SQL Server?

    Permissions:


    User/Logins Requires ALTER ANY LINKED SERVER permission.
    USE master;
    GRANT ALTER ANY LINKED SERVER TO TestLinkedServerPermissionUser;
    GO
Once the login has the above permission, it can create a linked server by running the script below or through the GUI.
    --An example of adding a linked server
    EXEC sp_addlinkedserver
        @server = 'WELCOMEPCMSSQL05',
        @srvproduct = '',
        @provider = 'SQLNCLI',
        @datasrc = 'WELCOME-PC\MSSQL05';
    Additional note:
    sp_addlinkedserver cannot be executed within a user-defined transaction.
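Once created, the connection can be verified with sp_testlinkedserver (the linked server name here reuses the one from the example above):

```sql
-- Raises an error if the linked server cannot be reached
EXEC sp_testlinkedserver N'WELCOMEPCMSSQL05';

-- A four-part name query is another quick smoke test
SELECT TOP (1) name
FROM [WELCOMEPCMSSQL05].master.sys.databases;
```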
What are the steps to follow when we upgrade SQL Server 2005 to SQL Server 2014?

When you are going to upgrade, follow the steps below:
a) Take downtime from the client.
b) Inform the application team about the downtime, meaning the databases will not be available to users.
c) Ensure that enough space is available on the destination server.
d) Back up all system databases and user databases on the source server.
e) Restore the backups from the source server to the destination server.
f) Check whether any backup failed; if so, fix it.
g) Script out the logins from the source server using the sp_help_revlogin script.
h) Execute the generated script on the destination server so that all logins are transferred from source to destination.
i) Fix orphaned users using the sp_change_users_login procedure.
j) Run the DBCC CHECKDB command.
k) Run UPDATE STATISTICS.
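Steps i through k can be sketched as follows (the database and user names are placeholders):

```sql
-- i) Report and fix orphaned users (sp_change_users_login matches the steps
--    above; ALTER USER ... WITH LOGIN is the modern equivalent)
EXEC sp_change_users_login 'Report';
-- EXEC sp_change_users_login 'Auto_Fix', 'SomeUser';

-- j) Verify the integrity of the restored database
DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS;

-- k) Refresh statistics after the upgrade
USE YourDatabase;
EXEC sp_updatestats;
```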
Why is a primary key essential in transactional replication?
In transactional replication, the primary key is used by the replication mechanism to identify which rows were changed, so every published table must have one.
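A quick pre-replication audit is to list the tables that cannot be published because they lack a primary key:

```sql
-- Tables without a primary key cannot be articles in transactional replication
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE OBJECTPROPERTY(t.object_id, 'TableHasPrimaryKey') = 0
ORDER BY s.name, t.name;
```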
Can we roll back the changes from a security patch upgrade in SQL Server?
     If you find a patch in Installed Updates in Control Panel, you can probably uninstall it from there.
Which is faster at removing duplicates: DISTINCT or GROUP BY?
SELECT DISTINCT columnname FROM Tablename
SELECT COLUMNNAME1, COLUMNNAME2
FROM TABLENAME
GROUP BY COLUMNNAME1, COLUMNNAME2
SQL's logical processing order starts from:
FROM
WHERE
GROUP BY
HAVING
SELECT
so I believe GROUP BY is faster than removing duplicates in the SELECT clause.
    The main statement used to retrieve data in T-SQL is the SELECT statement. Following are
    the main query clauses specified in the order that you are supposed to type them (known as
    “keyed-in order”):
    1. SELECT
    2. FROM
    3. WHERE
    4. GROUP BY
    5. HAVING
    6. ORDER BY
    But as mentioned, the logical query processing order, which is the conceptual interpretation
    order, is different. It starts with the FROM clause. Here is the logical query processing
    order of the six main query clauses:
    1. FROM
    2. WHERE
    3. GROUP BY
    4. HAVING
    5. SELECT
    6. ORDER BY
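A small sketch of why the logical order matters: because WHERE is processed before SELECT, a column alias defined in SELECT cannot be referenced in WHERE (the table and column names below are illustrative):

```sql
-- Fails: 'total' does not exist yet when WHERE is evaluated
-- SELECT qty * price AS total
-- FROM Sales
-- WHERE total > 100;

-- Works: repeat the expression, since WHERE runs before SELECT
SELECT qty * price AS total
FROM Sales
WHERE qty * price > 100;

-- ORDER BY is processed last, so it CAN use the alias
SELECT qty * price AS total
FROM Sales
ORDER BY total;
```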