Ashok Blog for SQL Learners and Beginners and Experts

Tuesday, 13 June 2017

SQL Interview Questions Part 2

Kindly see SQL Interview Questions Part 1 at the link below:

http://sqltechi.blogspot.in/2012/02/interview-questions-in-sql-2005-and.html


1) If you run DBCC CHECKDB, what runs in the background?

Checks the logical and physical integrity of all the objects in the specified database by performing the following operations:

DBCC CHECKDB is supported on databases that contain memory-optimized tables, but validation only occurs on disk-based tables.

  • Runs DBCC CHECKALLOC on the database.
  • Runs DBCC CHECKTABLE on every table and view in the database.
  • Runs DBCC CHECKCATALOG on the database.
  • Validates the contents of every indexed view in the database.
  • Validates link-level consistency between table metadata and file system directories and files when storing varbinary(max) data in the file system using FILESTREAM.
  • Validates the Service Broker data in the database.
    This means that the DBCC CHECKALLOC, DBCC CHECKTABLE, or DBCC CHECKCATALOG commands do not have to be run separately from DBCC CHECKDB.
Sample output

DBCC results for 'model'.
Service Broker Msg 9675, Level 10, State 1: Message Types analyzed: 13.
Service Broker Msg 9676, Level 10, State 1: Service Contracts analyzed: 5.
Service Broker Msg 9667, Level 10, State 1: Services analyzed: 3.
Service Broker Msg 9668, Level 10, State 1: Service Queues analyzed: 3.
Service Broker Msg 9669, Level 10, State 1: Conversation Endpoints analyzed: 0.
Service Broker Msg 9674, Level 10, State 1: Conversation Groups analyzed: 0.
Service Broker Msg 9670, Level 10, State 1: Remote Service Bindings analyzed: 0.
DBCC results for 'sys.sysrowsetcolumns'.
There are 630 rows in 7 pages for object 'sys.sysrowsetcolumns'.
DBCC results for 'sys.sysrowsets'.
There are 97 rows in 1 pages for object 'sys.sysrowsets'.
DBCC results for 'sysallocunits'.
There are 195 rows in 3 pages for object 'sysallocunits'.
There are 0 rows in 0 pages for object "sys.sysasymkeys".
DBCC results for 'sys.syssqlguides'.
There are 0 rows in 0 pages for object "sys.syssqlguides".
DBCC results for 'sys.queue_messages_1977058079'.
There are 0 rows in 0 pages for object "sys.queue_messages_1977058079".
DBCC results for 'sys.queue_messages_2009058193'.
There are 0 rows in 0 pages for object "sys.queue_messages_2009058193".
DBCC results for 'sys.queue_messages_2041058307'.
There are 0 rows in 0 pages for object "sys.queue_messages_2041058307".
CHECKDB found 0 allocation errors and 0 consistency errors in database 'model'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Checking both the current and another database

-- Check the current database.
DBCC CHECKDB;
GO
-- Check the AdventureWorks2012 database without nonclustered indexes.
DBCC CHECKDB (AdventureWorks2012, NOINDEX);

Checking the current database, suppressing informational messages

The following example checks the current database and suppresses all informational messages:
DBCC CHECKDB WITH NO_INFOMSGS;
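For routine use, these options can be combined; a minimal sketch (the database name is only an example):

```sql
-- Fast physical-only check, suitable for frequent scheduled runs
DBCC CHECKDB (N'AdventureWorks2012') WITH PHYSICAL_ONLY, NO_INFOMSGS;
GO
-- Full logical and physical check, reporting every error found
DBCC CHECKDB (N'AdventureWorks2012') WITH ALL_ERRORMSGS, NO_INFOMSGS;
```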
2) Difference between SQL Server patching in 2005 and 2008 cluster environments
Installing a Service Pack on a SQL Server 2008 failover cluster is very different from doing so on a SQL Server 2005 failover cluster.
With SQL Server 2005, the service pack (or hotfix) installation must be launched on the active node (the node that hosts the instance). Setup then simultaneously launches "remote silent" installations on all passive nodes, so every node in the cluster containing the SQL Server instance is updated at the same time.
With SQL Server 2008, to reduce downtime, Microsoft revised the deployment method. To apply a service pack (or hotfix), you now install it first on the passive nodes; the passive nodes are updated before the active node.
Therefore, for a SQL Server 2008 instance in a failover cluster, follow the scenario below to apply a Service Pack, Cumulative Update or hotfix:
1.  Apply the hotfix on passive node N2
2.  Reboot passive node N2
3.  Fail over the SQL Server resource: the passive node becomes the active node
4.  Apply the hotfix on the now-passive node N1
5.  Reboot node N1
3) Log shipping breaks because a maintenance plan also takes log backups. How do you resolve it?

Log shipping will not break because of full or differential backups, but it will break if a maintenance plan takes log backups on the primary, since that interrupts the log backup chain.

Step 1

Stop (or disable) the log backup task in the maintenance plan.

Step 2

Manually restore the log backup taken by the maintenance plan onto the secondary; log shipping will then resume automatically from the next log backup.
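A sketch of the manual restore in Step 2 (the path and database name are placeholders):

```sql
-- On the secondary: restore the log backup that the maintenance plan took,
-- keeping the database ready to accept the next log shipping restore
RESTORE LOG AdventureWorks
FROM DISK = N'\\PrimaryShare\LogBackups\AdventureWorks_maintplan.trn'
WITH NORECOVERY;   -- use WITH STANDBY instead if the secondary is readable
```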

4) Which DMV can be used to check how many dirty pages exist in memory for each database?

We can use the following query, which is based on the sys.dm_os_buffer_descriptors DMV, to see how many dirty pages exist in each database:
SELECT db_name(database_id) AS 'Database', count(page_id) AS 'Dirty Pages'
FROM sys.dm_os_buffer_descriptors
WHERE is_modified = 1
GROUP BY db_name(database_id)
ORDER BY count(page_id) DESC
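The same DMV can also report the dirty-page volume in megabytes (each page is 8 KB); a variant of the query above:

```sql
SELECT DB_NAME(database_id)       AS [Database],
       COUNT(page_id)             AS [Dirty Pages],
       COUNT(page_id) * 8 / 1024  AS [Dirty MB]   -- 8 KB per page
FROM sys.dm_os_buffer_descriptors
WHERE is_modified = 1
GROUP BY DB_NAME(database_id)
ORDER BY COUNT(page_id) DESC;
```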
5) What is the SQL Server transaction log architecture?

Every SQL Server database has a transaction log that records all transactions and the database modifications that are made by each transaction. The transaction log is a critical component of the database and, if there is a system failure, the transaction log might be required to bring your database back to a consistent state.

The SQL Server transaction log operates logically as if the transaction log is a string of log records. Each log record is identified by a log sequence number (LSN). Each new log record is written to the logical end of the log with an LSN that is higher than the LSN of the record before it. Log records are stored in a serial sequence as they are created. Each log record contains the ID of the transaction that it belongs to. For each transaction, all log records associated with the transaction are individually linked in a chain using backward pointers that speed the rollback of the transaction.


The transaction log in a database maps over one or more physical files. Conceptually, the log file is a string of log records. Physically, the sequence of log records is stored efficiently in the set of physical files that implement the transaction log. There must be at least one log file for each database.
The SQL Server Database Engine divides each physical log file internally into a number of virtual log files. Virtual log files have no fixed size, and there is no fixed number of virtual log files for a physical log file. The Database Engine chooses the size of the virtual log files dynamically while it is creating or extending log files. The Database Engine tries to maintain a small number of virtual files. The size of the virtual files after a log file has been extended is the sum of the size of the existing log and the size of the new file increment. The size or number of virtual log files cannot be configured or set by administrators.
The only time virtual log files affect system performance is if the physical log files are defined by small size and growth_increment values. The size value is the initial size for the log file and the growth_increment value is the amount of space added to the file every time new space is required. If the log files grow to a large size because of many small increments, they will have many virtual log files. This can slow down database startup and also log backup and restore operations. We recommend that you assign log files a size value close to the final size required, and also have a relatively large growth_increment value. For more information about these parameters, see ALTER DATABASE File and Filegroup Options (Transact-SQL).
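To see how many virtual log files a log currently has, and to pre-size the log as recommended above, something like the following can be used (the database and logical file names are examples; DBCC LOGINFO is undocumented but widely used):

```sql
-- One row per virtual log file in the current database
DBCC LOGINFO;

-- Pre-size the log with a large growth increment to keep the VLF count low
ALTER DATABASE AdventureWorks
MODIFY FILE (NAME = AdventureWorks_log, SIZE = 8GB, FILEGROWTH = 1GB);
```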

The transaction log is a wrap-around file. For example, consider a database with one physical log file divided into four virtual log files. When the database is created, the logical log file begins at the start of the physical log file. New log records are added at the end of the logical log and expand toward the end of the physical log. Log truncation frees any virtual logs whose records all appear in front of the minimum recovery log sequence number (MinLSN). The MinLSN is the log sequence number of the oldest log record that is required for a successful database-wide rollback. The transaction log in the example database would look similar to the one in the following illustration.

[Illustration: transaction log physical architecture - a log file divided into four virtual log files]

When the end of the logical log reaches the end of the physical log file, the new log records wrap around to the start of the physical log file.

[Illustration: log records wrapping around to the start of the log file]

What are the permissions required to view execution plans?
The user must be a member of the sysadmin, db_owner or dbcreator role, or be granted the SHOWPLAN permission:
GRANT SHOWPLAN TO [username]
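A sketch of granting and using the permission ([AppUser] and the query are placeholders):

```sql
GRANT SHOWPLAN TO [AppUser];
GO
-- The grantee can now request estimated plans without executing the query
SET SHOWPLAN_XML ON;
GO
SELECT FirstName, LastName FROM Person.Person WHERE LastName = N'Smith';
GO
SET SHOWPLAN_XML OFF;
```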

What's the difference between peer-to-peer and merge replication with SQL Server?

  1. Peer-to-Peer Transactional Replication is typically used to support applications that distribute read operations across a number of server nodes.
Although peer-to-peer replication enables scaling out of read operations, write performance for the topology is like that of a single node, because ultimately all inserts, updates, and deletes are propagated to all nodes. If one of the nodes in the system fails, an application layer can redirect the writes for that node to another node; this is not a requirement, but it does maintain availability if a node fails.
  2. Merge Replication is bi-directional, i.e. read and write operations are propagated to and from all nodes. Merge replication often requires the implementation of conflict resolution.

What are the jobs created by transactional replication?

https://technet.microsoft.com/en-us/library/ms151706(v=sql.105).aspx



https://technet.microsoft.com/en-us/library/ms151794(v=sql.105).aspx


In transactional replication we first have to configure the Distributor, then the Publisher and finally the Subscribers.

My Server Information:
Publishing database name: AdventureWorks
Distributor Server Name:   D1\DISTRIBUTOR
Publisher Server name:      P1\PUBLISHER
Subscriber Server Name:   S1\SUBSCRIBER
Publisher Name:                  Publisher1

1. Distribution Server Jobs:
Here is the list of jobs that will be created once the Distribution Server is configured. The following jobs are created on the Distribution server only:

1. Agent history clean up: distribution
2. Distribution clean up: distribution
3. Expired subscription clean up
4. Reinitialize subscriptions having data validation failures
5. Replication agents checkup
6. Replication monitoring refresher for distribution

7. S1\SUBSCRIBER-AdventureWorks-Publisher1-S1\SUBSCRIBER-3

2. Publisher Server Jobs:
Once the Distributor is configured successfully, we configure the Publisher. The following jobs are created on the Publisher server only:

1. P1\PUBLISHER-AdventureWorks-1
2. P1\PUBLISHER-AdventureWorks-Publisher1-1

3. Subscriber Server Jobs:
No jobs are created on the Subscriber; one job is created on the Distributor when the subscriber is added:

S1\SUBSCRIBER-AdventureWorks-Publisher1-S1\SUBSCRIBER-3
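The agent jobs listed above can be verified on the distributor by querying msdb; a sketch:

```sql
-- Replication agent jobs are created under REPL-* job categories
SELECT c.name AS category, j.name AS job_name
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.syscategories AS c ON c.category_id = j.category_id
WHERE c.name LIKE 'REPL-%'
ORDER BY c.name, j.name;
```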

How does SQL Server Upgrade Advisor help you prepare for upgrades

from SQL Server 2005/2008/2008 R2/2012 to SQL Server 2014?

Ref:
https://msdn.microsoft.com/en-us/library/ms144256(v=sql.120).aspx

SQL Server Upgrade Advisor helps you prepare for upgrades to SQL Server 2014. Upgrade Advisor analyzes installed components from earlier versions of SQL Server, and then generates a report that identifies issues to fix either before or after you upgrade.

When you run Upgrade Advisor, the Upgrade Advisor Home page appears. From the Home page, you can run the following tools:
  • Upgrade Advisor Analysis Wizard
  • Upgrade Advisor Report Viewer
  • Upgrade Advisor Help
The first time that you use Upgrade Advisor, run the Upgrade Advisor Analysis Wizard to analyze SQL Server components. When the wizard finishes the analysis, view the resulting reports in the Upgrade Advisor Report Viewer. Each report provides links to information in Upgrade Advisor Help that will help you fix or reduce the effect of the known issues.
Upgrade Advisor analyzes the following SQL Server components:
  • Database Engine
  • Analysis Services
  • Reporting Services
  • Integration Services
The analysis examines objects that can be accessed, such as scripts, stored procedures, triggers, and trace files. Upgrade Advisor cannot analyze desktop applications or encrypted stored procedures.
Output is in the form of an XML report. View the XML report by using the Upgrade Advisor report viewer.
Upgrade Advisor is available in a feature pack.
Prerequisites for installing and running Upgrade Advisor are as follows:
  • Windows Server 2008 SP2, Windows 7 SP1, or Windows Server 2008 R2 SP1.
  • Windows Installer beginning with version 4.5.
After you install Upgrade Advisor, you can open it from the Start menu:
  • Click Start, point to All Programs, point to Microsoft SQL Server 2014, and then click SQL Server 2014 Upgrade Advisor.


Consider this scenario: you have a 500 GB database that you need to migrate to another server.
What is the minimum-downtime way to perform this activity using backup/restore?

With the backup method, use backup WITH COMPRESSION: it reduces the backup size, making the file faster to copy and restore on the other server.
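A sketch of the compressed backup and restore (paths and logical file names are placeholders):

```sql
-- On the source: compressed full backup
BACKUP DATABASE AdventureWorks
TO DISK = N'E:\Backups\AdventureWorks_full.bak'
WITH COMPRESSION, STATS = 10;

-- On the destination: restore, relocating the files as needed
RESTORE DATABASE AdventureWorks
FROM DISK = N'E:\Backups\AdventureWorks_full.bak'
WITH MOVE N'AdventureWorks_Data' TO N'D:\Data\AdventureWorks.mdf',
     MOVE N'AdventureWorks_Log'  TO N'L:\Log\AdventureWorks_log.ldf';
```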
Can we install SQL Server 2008 R2 on Windows Server 2012?
Ref:
https://support.microsoft.com/en-us/help/2681562/using-sql-server-in-windows-8-and-later-versions-of-windows-operating-system
SQL Server 2008 R2 is supported on Windows Server 2012, but you need to apply the latest service pack for your SQL Server version after you install SQL Server.






SQL Server | Windows 10 or Windows Server 2016 | Windows Server 2012/2012 R2 | Windows 8/8.1 | Windows 7
SQL 2016 | SQL Server 2016 RTM | SQL Server 2016 RTM | SQL Server 2016 RTM | Not supported
SQL 2014 | SQL Server 2014 SP1 or a later update | SQL Server 2014 SP1 or a later update | SQL Server 2014 SP1 or a later update | Supported
SQL 2012 | SQL Server 2012 SP2 or a later update | SQL Server 2012 release version or later | SQL Server 2012 SP2 or a later update | Supported
SQL 2008 R2 | Not supported | SQL Server 2008 R2 SP1 or a later update | SQL Server 2008 R2 SP1 or a later update | Supported
SQL 2008 | Not supported | SQL Server 2008 SP3 or a later update | SQL Server 2008 SP3 or a later update | Supported
SQL 2005 | Not supported | Not supported | Not supported | Supported

Note:

Microsoft SQL Server 2005 (the release version and service packs) and earlier versions of SQL Server are not supported on Windows 10, Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, Windows 8.1, or Windows 8. You will receive a warning in the Action Center if Windows 10, Windows 8.1, or Windows 8 detects an instance of SQL Server 2005.

Extended support for SQL Server 2005 ended on April 12, 2016.
If you are still running SQL Server 2005, you will no longer receive security updates and technical support. We recommend upgrading to SQL Server 2014 or Azure SQL Database to achieve breakthrough performance.

What are the New Features Introduced in SQL 2014


In-Memory OLTP Engine


http://sqlmag.com/sql-server-2014/sql-server-2014-important-new-features


The In-Memory OLTP engine is designed for high concurrency and uses a new optimistic concurrency control mechanism to eliminate locking delays.


The In-Memory OLTP engine is supported on Windows Server 2012 R2, Windows Server 2012, and Windows Server 2008 R2 SP2. In addition, you need to be using the SQL Server 2014 Enterprise, Developer, or Evaluation edition. Notably, In-Memory OLTP is not supported on the SQL Server 2014 Standard edition.

The In-Memory OLTP engine works with commodity server hardware, but it has a number of limitations. For instance, not all of the data types are supported. Some of the data types that aren't supported for memory-optimized tables include geography, hierarchyid, image, text, ntext, varchar(max), and xml. In addition, several database features can't be used with the new In-Memory OLTP capability. Database mirroring, snapshots, computed columns, triggers, clustered indexes, identity columns, FILESTREAM storage, and FOREIGN KEY, CHECK, and UNIQUE constraints aren't supported.
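A minimal memory-optimized table sketch (all names are illustrative; the database must already have a MEMORY_OPTIMIZED_DATA filegroup):

```sql
CREATE TABLE dbo.SessionState
(
    SessionId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload   NVARCHAR(1000) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```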

Enhanced Windows Server 2012 Integration

SQL Server 2014 provides improved integration with Windows Server 2012 R2 and Windows Server 2012. SQL Server 2014 will have the ability to scale up to 640 logical processors and 4TB of memory in a physical environment. It can scale up to 64 virtual processors and 1TB of memory when running on a virtual machine (VM).
SQL Server 2014 also provides a new solid state disk (SSD) integration capability that enables you to use SSD storage to expand SQL Server 2014's buffer pool. The new buffer pool enhancements can help increase performance in systems that have maxed out their memory capability by using high-speed nonvolatile RAM (NVRAM) in the SSD drives as an extension to SQL Server 2014's standard buffer pool. The new buffer pool extensions can provide the best performance gains for read-heavy OLTP workloads.
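Enabling the buffer pool extension looks roughly like this (the file path and size are examples):

```sql
-- Point the buffer pool extension at a file on an SSD volume
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = N'S:\SSDCACHE\Example.BPE', SIZE = 64 GB);
```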

Enhancements to AlwaysOn Availability Groups

SQL Server 2014's AlwaysOn Availability Groups has been enhanced with support for additional secondary replicas and Windows Azure integration. First introduced with SQL Server 2012, AlwaysOn Availability Groups boosted SQL Server availability by providing the ability to protect multiple databases with up to four secondary replicas. In SQL Server 2014, Microsoft has enhanced AlwaysOn integration by expanding the maximum number of secondary replicas from four to eight. Readable secondary replicas are now available for read-only workloads, even when the primary replica is unavailable. SQL Server 2014 also provides Windows Azure AlwaysOn integration. This new integration feature enables you to create asynchronous availability group replicas in Windows Azure for disaster recovery. In the event of a local database outage, you can run your SQL Server databases from Windows Azure VMs. The new Windows Azure AlwaysOn availability options are fully integrated into SQL Server Management Studio (SSMS).

Enhancements to Backups

Database backups in SQL Server now support built-in database encryption. Previous releases all required a third-party product to encrypt database backups. The backup encryption process uses either a certificate or an asymmetric key to encrypt the data. The supported backup encryption algorithms are Advanced Encryption Standard (AES) 128, AES 192, AES 256, and Triple DES (3DES).
SQL Server 2014 also provides new Windows Azure integration to SQL Server's backup capabilities. You can specify a Windows Azure URL as the target for your SQL Server 2014 database backups. This new Windows Azure backup feature is fully integrated into SSMS.
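An encrypted backup sketch; it assumes a certificate (here called BackupCert) already exists in master:

```sql
BACKUP DATABASE AdventureWorks
TO DISK = N'E:\Backups\AdventureWorks_enc.bak'
WITH ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);
```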

Updateable Columnstore Indexes

Columnstore indexes are another of Microsoft's high performance in-memory technologies. Microsoft introduced the columnstore index in SQL Server 2012 to provide significantly improved performance for data warehousing types of queries. Microsoft states that for some types of queries, columnstore indexes can provide up to 10x performance improvements. However, in the original implementation of the columnstore indexes, the underlying table had to be read-only. SQL Server 2014 eliminates this restriction. The new updateable columnstore index enables updates to be performed to the underlying table without first needing to drop the columnstore index. A SQL Server 2014 columnstore index must use all of the columns in the table, and it can't be combined with other indexes.
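Making a columnstore updateable means making it the clustered index; a sketch (the table name is illustrative, and the table can have no other indexes):

```sql
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
ON dbo.FactSales;
```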

SQL Server Data Tools for Business Intelligence

Previously known as Business Intelligence Development Studio (BIDS) and SQL Server Data Tools (SSDT), the new SQL Server Data Tools for BI (SSDT BI) is used to create SQL Server Analysis Services (SSAS) models, SQL Server Reporting Services (SSRS) reports, and SQL Server Integration Services (SSIS) packages. SSDT BI is based on Microsoft Visual Studio 2012 and supports SSAS and SSRS for SQL Server 2014 and earlier, but SSIS projects are limited to SQL Server 2014. In the pre-release version of SQL Server 2014 Community Technology Preview 2 (CTP2), SQL Server Setup doesn't install SSDT BI. Instead, you must download it separately from the Microsoft Download Center.

Power BI for Office 365 Integration

Power BI for Office 365 is Microsoft's cloud-based BI solution that leverages familiar Office 365 and Excel tools. Power BI for Office 365 provides business insights through data visualization and navigation capabilities. Power BI for Office 365 includes:
  • Power Pivot (formerly PowerPivot). This Excel 2010 and Excel 2013 ProPlus add-in enables Excel to perform data analysis on large volumes of data.
  • Power View. This Excel 2013 ProPlus add-in provides a Silverlight-based data visualization and navigation tool. Microsoft has extended Power View so that you can now use it with multidimensional models (OLAP cubes). Power View multidimensional models also support queries using Data Analysis Expressions (DAX). Power View's data visualization capabilities have also been enhanced. Power View now supports a number of data visualizations, including tables, matrixes, bubble charts, and geographical maps. To learn more about Power View's new multidimensional support, go to MSDN's Power View for Multidimensional Models web page.
  • Power Query (formerly code-named Data Explorer). This Excel 2013 add-in lets you discover and integrate data into Excel. It supports SQL Server data sources as well as external sources such as Windows Azure, text files, XML files, Open Data Protocol (OData) feeds, web pages, Hadoop data sets, and public data sets (e.g., Open Government data from data.gov).
  • Power Map (formerly code-named GeoFlow). This Excel 2013 ProPlus add-in provides 3D mapping visualizations.

Also see discontinued features on SQL 2014

https://msdn.microsoft.com/en-us/library/ms144262(v=sql.120).aspx


What is the difference between the 32-bit and 64-bit SQL Server versions?

http://searchsqlserver.techtarget.com/tip/32-bit-vs-the-64-bit-SQL-Server-performance-surge
The first main advantage 64-bit provides for SQL Server is the ability to directly deal with larger amounts of memory. The 32-bit system can directly address 4 GB of memory at the most; the only way to get to more memory than that in a 32-bit system is by using techniques like Address Windowing Extensions. Though it will allow you to use anywhere from 32 GB to 64 GB of RAM, it will cost you in performance. It's not possible to use much of this memory in a truly contiguous fashion -- it's often only useful for the sake of caching data pages, not performing actual work.
A 64-bit system, on the other hand, can directly address 1,024 gigabytes of physical memory, so the amount of data SQL Server can cache, hold directly in memory and perform live operations on, is larger by whole orders of magnitude. Operations that might take a long time or would require a lot of work in temporary tables in a 32-bit system can be done in-memory and much faster on a 64-bit system, since there's a lot less paging or blocking going on.
Sixty-four-bit environments also have the ability to support up to 64 processors. SQL Server was originally written to parallelize work as heavily as possible across physical and virtual processors, and in existing 32-bit setups it does so very effectively. If you move an existing SQL Server workload from a 32-bit, four-way system to, say, a 64-bit, 16-way system, the work will not only be spread out across more individual CPUs but also will be handled all the more efficiently.
The difference between 64-bit and 32-bit is not speed. 64-bit is not necessarily faster.
The main difference is that a 32-bit app can never address more than 4 GB of memory,
while a 64-bit app can address memory up to 16 exabytes or so.
What is the difference between SQL Server 2005 32-bit and 64-bit?
Do we need a 64-bit OS to install 64-bit SQL Server?
Can we restore a 32-bit database in a 64-bit version, and a 64-bit database in a 32-bit version?
Ans: The database file formats are identical, as are the backup formats. You need a 64-bit OS to install 64-bit SQL Server.
Unless you’re talking about Itanium, a 64-bit server with 64-bit OS will run 32-bit software just fine.

What are the transaction isolation levels in SQL Server?

Isolation Levels:

Read Uncommitted (Low)

Read Committed(Default)

Repeatable Read

Snapshot

Serializable (High)

Common concurrency Problems

Dirty Reads

Lost Updates

Non Repeatable Read

Phantom Reads

Isolation Level | Dirty Reads | Lost Updates | Non-Repeatable Reads | Phantom Reads
Read Uncommitted | Yes | Yes | Yes | Yes
Read Committed | No | Yes | Yes | Yes
Repeatable Read | No | No | No | Yes
Snapshot | No | No | No | No
Serializable | No | No | No | No

What is the difference between the Serializable and Snapshot isolation levels, since neither has any concurrency side effects in the table above?

Serializable isolation is implemented by acquiring locks, which means the resources are locked for the duration of the current transaction. This isolation level does not have any concurrency side effects, but at the cost of a significant reduction in concurrency.

Snapshot isolation does not acquire locks; it maintains row versions in tempdb. Since snapshot isolation does not lock resources, it can significantly increase the number of concurrent transactions while providing the same level of data consistency as serializable isolation does.

Read Committed (default):

We can only see committed data. If one transaction updates a column in a table row and, at the same time, another transaction tries to update the same record, the second transaction is blocked: it must wait for the first process to complete and commit before it can read the data.

Read Uncommitted

One transaction is permitted to read data that has been modified by another transaction that has not yet been committed, so we can get dirty reads. We get the same behaviour using the NOLOCK hint.

Repeatable Read
The repeatable read isolation level ensures that data one transaction has read is prevented from being updated or deleted by any other transaction,
but it does not prevent new rows from being inserted by other transactions, resulting in the phantom read concurrency problem.

Snapshot
It takes a snapshot of the committed records before another transaction modifies them.
It maintains row versions in tempdb, since snapshot isolation does not lock resources.
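Snapshot isolation must first be enabled at the database level; a sketch (the database and table names are examples):

```sql
ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON;
GO
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    SELECT * FROM dbo.emp;   -- reads the last committed version, never blocks
COMMIT TRANSACTION;
```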

Serializable
The serializable isolation level ensures that data one transaction has read is prevented from being updated or deleted by any other transaction. It also prevents new rows from being inserted by other transactions, so this isolation level prevents both the non-repeatable read and phantom read problems. It acquires locks.

Dirty Reads

A dirty read happens when one transaction is permitted to read data that has been modified by another transaction that has not yet been committed.

If the first transaction is rolled back after the second reads the data, the second transaction has dirty data that no longer exists.

Phantom Reads

A phantom read happens when one transaction executes a query twice and gets a different number of rows in the result set each time. This happens when a second transaction inserts a new row that matches the WHERE clause of the query executed by the first transaction.

Transaction 1

SELECT * FROM emp WHERE id BETWEEN 1 AND 3   -- 2 rows returned
-- do some other work / wait: WAITFOR DELAY '00:00:10'
SELECT * FROM emp WHERE id BETWEEN 1 AND 3   -- 3 rows returned

Transaction 2

-- inserts a new emp record while Transaction 1 is waiting (values illustrative)
INSERT INTO emp (id, name) VALUES (3, 'NewEmp')

To resolve this, run Transaction 1 under the SERIALIZABLE isolation level:

this places a range lock on ids between 1 and 3, which prevents any other transaction from inserting new rows within that range.


What pre-steps and post-steps do you perform for a migration from SQL Server 2008 to SQL Server 2012?

In-place method

The existing instance is upgraded directly, so we take a copy of the system databases on the source first. This method carries more risk and cannot be rolled back.

Side-by-side method

Pre-checks

1. Take backups of the user databases on the source (SQL 2008); system database backups are not needed.

2. Script out the logins on the source (SQL 2008) using the sp_help_revlogin script.

3. Script out jobs, linked servers, replication, operators and triggers.

4. Configure Database Mail on the destination; note down the SMTP information first.

Process

1. Restore the user database backups on the destination (SQL 2012).

2. Change the compatibility level of each database to SQL 2012.

3. Set the required collation on each database.

4. Apply all logins, jobs, linked servers, replication, operators and triggers.

5. Fix orphaned users if possible.

Post-checks

1. Set the memory settings as per the new machine's RAM size.

2. Create additional tempdb data files as per the number of cores.

3. Check that linked servers and logins are working.

4. Check that application connectivity is working.

Top 10 New Features in SQL Server 2016


Microsoft releases new and exciting features with each new version. SQL Server 2016 also has lots of new features for database users and makes their lives easier. Real-time operational analytics is amazing, with up to 30x faster transactions using in-memory OLTP, cutting operations from minutes to seconds. Here I am covering 10 new features in SQL Server 2016 that I think are great for database users, though there are tons of new features which we will see in future articles in detail.
1. Stretch Databases
2. Always Encrypted
3. Dynamic Data Masking
4. Row Level Security
5. Multiple TempDB
6. JSON Support
7. PolyBase
8. Temporal Tables
9. Query Store
10. Advanced Analytics

1. Stretch Databases

With this new and exciting feature you can stretch or extend the storage of an on-premises database to Microsoft Azure, and can retain data for a longer time without breaking tables apart, as we normally do by creating historical tables for huge tables. It minimizes storage cost, since you control it in Azure. Once data is stretched to the cloud, data movement is handled in the background and the data remains fully queryable without any changes to applications. The major benefit is that Azure storage is up to 80% less expensive and reduces on-premises maintenance.

2. Always Encrypted

SQL Server 2016 includes an enhanced security layer called 'Always Encrypted' which encrypts data in motion, in memory and at rest without impacting performance. Microsoft's goal is to protect sensitive data such as bank details, credit card numbers and other critical data from data managers, on-premises DBAs, cloud operators, unauthorized users and so on, leaving it readable only to the data owners. The Always Encrypted driver automatically encrypts and decrypts sensitive data in client applications: values in sensitive columns are encrypted before they are passed to the database engine, while query semantics are preserved.
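A minimal sketch of the table-side DDL follows. It assumes a column encryption key (here named `CEK_Auto1`) has already been created from a column master key; deterministic encryption of strings also requires a BIN2 collation:

```sql
CREATE TABLE dbo.Customers (
    CustomerID   int IDENTITY PRIMARY KEY,
    CreditCardNo nvarchar(25)
        COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = CEK_Auto1,   -- key created beforehand
            ENCRYPTION_TYPE       = DETERMINISTIC,
            ALGORITHM             = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        )
);
```

Deterministic encryption allows equality lookups on the column; randomized encryption is stronger but supports no operations on the server side.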

3. Dynamic Data Masking

This SQL Server 2016 feature helps prevent unauthorized access to confidential and sensitive data: the client controls how much of the data to mask, without impacting the application layer. The masking is applied to query output without modifying existing queries, protecting data which clients don't want to reveal. This feature is an added advantage on top of features already available in previous versions of SQL Server, such as Auditing and Encryption.
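Masking is declared per column. A short sketch, assuming a `dbo.Customers` table with `Email` and `CreditCardNo` columns (the table and the `PowerUser` principal are examples):

```sql
-- Mask an email address (a***@xxxx.com style output for non-privileged users)
ALTER TABLE dbo.Customers
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

-- Show only the last 4 digits of a card number
ALTER TABLE dbo.Customers
    ALTER COLUMN CreditCardNo
    ADD MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)');

-- Privileged users can still see the real data
GRANT UNMASK TO PowerUser;
```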

4. Row Level Security (RLS)

SQL Server 2016 introduces a new feature to control and protect data at the row level, called Row Level Security (RLS). You can restrict data access for particular users with total transparency to applications. This is one more way to control access at a more granular level than GRANT, REVOKE and DENY permissions.
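RLS is built from an inline predicate function plus a security policy. A minimal sketch, assuming a `dbo.Orders` table with a `SalesRep` column holding the owning user's name (all names are examples):

```sql
-- Predicate: a row is visible only to the sales rep it belongs to
CREATE FUNCTION dbo.fn_SalesFilter(@SalesRep sysname)
    RETURNS TABLE
    WITH SCHEMABINDING
AS
    RETURN SELECT 1 AS fn_result
           WHERE @SalesRep = USER_NAME();
GO

-- Bind the predicate to the table; filtering is transparent to queries
CREATE SECURITY POLICY SalesPolicy
    ADD FILTER PREDICATE dbo.fn_SalesFilter(SalesRep) ON dbo.Orders
    WITH (STATE = ON);
```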

5. Multiple TempDB

As we know, tempdb is re-created every time SQL Server restarts, temporary objects are dropped automatically, and backup or restore activity is not allowed on tempdb. In SQL Server 2016 you can set up the tempdb data files in the Database Engine Configuration step during installation, which by default creates one file per logical processor, up to a maximum of 8 tempdb files on servers with 8 or more cores. Performance is also taken care of: caching of temporary tables and variables makes drop and create activity very fast, with reduced page allocation contention. Trace flags 1117 and 1118 are no longer required, and all tempdb files auto-grow at the same time.
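After installation, the resulting tempdb layout can be verified with a quick catalog query:

```sql
-- One row per tempdb file; size is stored in 8 KB pages
SELECT name,
       physical_name,
       size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files;
```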

6. JSON Support

JavaScript Object Notation, JSON for short, is one of the most commonly used data exchange formats; web browsers and many modern services exchange JSON text. So there were many requests for SQL Server to handle and process JSON. Microsoft added this built-in feature to parse and process JSON-formatted data, query JSON data in storage, and exchange relational data as JSON in SQL Server.
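Both directions can be sketched in a few lines of T-SQL using the new `FOR JSON` clause and `OPENJSON` function (the sample JSON text is made up):

```sql
-- Relational rows out as JSON text
SELECT name, database_id
FROM sys.databases
FOR JSON AUTO;

-- JSON text back into relational rows
DECLARE @j nvarchar(max) =
    N'[{"id":1,"name":"Ashok"},{"id":2,"name":"Kumar"}]';

SELECT *
FROM OPENJSON(@j)
WITH (id int '$.id', name nvarchar(50) '$.name');
```

`OPENJSON` requires the database compatibility level to be 130 or higher.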

7. PolyBase

This feature allows you to use T-SQL statements to query Hadoop clusters or Azure Blob Storage, and to write queries that join semi-structured data with SQL Server relational data sets. PolyBase is an in-built feature of SQL Server 2016, and you do not need an understanding of Java, other Hadoop-related concepts, or any extra tools. The PolyBase concept is not new: it was already introduced in Microsoft's Massively Parallel Processing (MPP) appliance, the Parallel Data Warehouse (PDW).
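The external-table setup can be sketched roughly as below; the Hadoop location, file layout and table definition are all hypothetical examples, and PolyBase must already be installed and enabled on the instance:

```sql
-- Point at a Hadoop cluster (address is an example)
CREATE EXTERNAL DATA SOURCE HadoopStorage WITH (
    TYPE = HADOOP,
    LOCATION = 'hdfs://namenode:8020');

-- Describe the file layout
CREATE EXTERNAL FILE FORMAT CsvFormat WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ','));

-- Expose the HDFS files as a queryable table
CREATE EXTERNAL TABLE dbo.WebLogs (
    LogDate date, Url nvarchar(400), Hits int)
WITH (LOCATION = '/logs/',
      DATA_SOURCE = HadoopStorage,
      FILE_FORMAT = CsvFormat);
```

Once created, `dbo.WebLogs` can be joined to ordinary local tables with plain T-SQL.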

8. Temporal Tables

Temporal tables, also called system-versioned tables, are a new feature in SQL Server 2016 that automatically keeps a history of the data in a table. The history is physically a different table than the base table, and you can also use it as a row-versioning solution. A temporal table allows you to query records as they were before being updated or deleted, whereas a regular table returns only current data. This feature is useful for auditing, versioning, monitoring and recovering modified data from history.
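A minimal sketch of a system-versioned table and a point-in-time query (table and column names are examples):

```sql
CREATE TABLE dbo.Employee (
    EmpID     int PRIMARY KEY,
    Salary    money,
    ValidFrom datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));

-- Query the table as it looked at a past point in time
SELECT *
FROM dbo.Employee
FOR SYSTEM_TIME AS OF '2017-01-01';
```

Updates and deletes move the old row versions into `dbo.EmployeeHistory` automatically.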

9. Query Store

Query Store, as the name suggests, captures a history of queries, execution plans and runtime statistics which we can review for performance. You can easily find performance regressions caused by query plan changes and fix them. Prior to SQL Server 2016 you could only see execution plans via the DMVs over the plan cache, but now historical execution plans are also available for review.
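Turning Query Store on and reading back the captured history can be sketched like this (database name is an example):

```sql
-- Enable Query Store for the database
ALTER DATABASE SalesDB SET QUERY_STORE = ON;

-- Top 10 queries by total duration from the captured runtime statistics
SELECT TOP 10
       q.query_id,
       SUM(rs.avg_duration * rs.count_executions) AS total_duration
FROM sys.query_store_query q
JOIN sys.query_store_plan p           ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats rs ON p.plan_id  = rs.plan_id
GROUP BY q.query_id
ORDER BY total_duration DESC;
```

The query text for an interesting `query_id` can then be looked up in `sys.query_store_query_text`.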

10. Advanced Analytics

SQL Server 2016 comes with enhanced analytics through the R language. Revolution Analytics is an open-source statistics company which Microsoft acquired recently. Users no longer need to export data to run analysis in R, as it is incorporated into SQL Server: R is built into your T-SQL, alongside end-to-end mobile BI, high-performance DW, mission-critical OLTP, multi-threading and massively parallel processing with in-memory technology.
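In-database R runs through `sp_execute_external_script`. A minimal sketch, assuming R Services is installed and the `external scripts enabled` option is on:

```sql
-- Compute a mean in R over rows fed in from T-SQL
EXEC sp_execute_external_script
    @language = N'R',
    @script   = N'OutputDataSet <- data.frame(
                      mean_val = mean(InputDataSet$val))',
    @input_data_1 = N'SELECT CAST(database_id AS float) AS val
                      FROM sys.databases'
WITH RESULT SETS ((mean_val float));
```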
What is Buffer Pool Extension (BPE) in SQL 2014?

Extends the buffer pool onto an SSD (solid state disk)

Provides the best performance gains for read-heavy OLTP workloads

Increases random I/O throughput with reduced latency

Only clean pages are written to the BPE

The wait type used while a BPE I/O completes is EC

Can significantly slow down performance for large serial range scans

SQL Server 2014 also provides a new solid state disk (SSD) integration capability that enables you to use SSD storage to expand SQL Server 2014's buffer pool. The new buffer pool enhancements can help increase performance in systems that have maxed out their memory capacity, by using high-speed nonvolatile RAM (NVRAM) in the SSD drives as an extension to SQL Server 2014's standard buffer pool. The buffer pool extension provides the best performance gains for read-heavy OLTP workloads.
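Enabling and inspecting BPE takes two statements; the file path and size below are examples (the size should be at least as large as max server memory):

```sql
-- Enable a buffer pool extension file on an SSD volume (path/size are examples)
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'E:\SSD\ExtensionFile.BPE', SIZE = 32 GB);

-- Check the current BPE state
SELECT * FROM sys.dm_os_buffer_pool_extension_configuration;
```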


What are Parallelism and MaxDOP (Maximum Degree of Parallelism) in SQL Server?

Parallelism:

Parallelism takes a query, breaks the work needed to complete it into smaller tasks, and assigns those tasks to different worker threads

MaxDOP:

The number of additional threads assigned to the query is controlled by the Max Degree of Parallelism (MaxDOP) setting

Parallelism does not occur in the following scenarios:

1) The cost of the query is below the cost threshold for parallelism

2) The instance-level MaxDOP is set to 1

3) Only 1 CPU is assigned to SQL Server

4) Outdated statistics on your data cause the query optimizer to underestimate the cost of the query

MaxDOP controls the number of cores a parallel query may use, while the cost threshold for parallelism controls when parallelism kicks in.

How to set MaxDOP:

In SSMS, right-click the server name, choose Properties, open the Advanced tab, and under Parallelism change the "max degree of parallelism" setting to the number of cores you want to allow a query to use.
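The same settings can be changed in T-SQL instead of SSMS; the values 4 and 25 below are illustrative, not recommendations:

```sql
-- Instance-wide settings (values are examples)
EXEC sp_configure 'show advanced options', 1;          RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;      RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 25; RECONFIGURE;

-- Or override MaxDOP for a single query with a hint (table name is an example)
SELECT COUNT(*) FROM dbo.BigTable OPTION (MAXDOP 2);
```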


What are the IO affinity / CPU affinity in SQL Server