
Ashok Blog for SQL Learners and Beginners and Experts

Tuesday, 4 December 2012

Why is my log file full? & Common Table Expression (CTE) in SQL Server 2005 & Fix: maximum number of user connections has already been reached


“Why is my log file growing?” and “Why is my log file full” are heard regularly. Or, the variation, “I ran a full backup but the log file is still full/growing.” occasionally comes up. The people asking these questions are frequently, even appropriately, frantic. I’m writing this blog post for two reasons. First, to try to add a little bit of weight to what must surely be one of the most searched for phrases on the internet when it comes to SQL Server. Second, just to have a shorthand to answer the question, “Here, check my blog post.”
Right off the bat, database backups are not backing up the log. There. That’s out of the way. Yes, I’m serious. Database backups do not back up the log. They back up all committed data in the database (which might include some or all of what might be in your log file) and they back up some transactions that completed during the backup process as part of the cleanup at the end of the backup (those would, by definition, be in the log file). But they do not back up the log file. Why is this important?
The transaction log on the database represents the work you are doing to that database. It records the row you inserted, the twenty you deleted and the five that were updated. It records the fact that you dropped a table or truncated a table (and yes, truncation is a logged operation). All of these logged events are written to the log. They are written to the log regardless of the recovery model. What recovery model, you ask? Ah, and there we begin to hit our problem.
There are three recovery models, full, bulk-logged and simple. The first and third are the ones that most people use, so they’re all I’m going to worry about for this post. Full recovery means that committed transactions (transactions that have been successfully completed) that are written to the log are retained, even after a checkpoint operation. Simple recovery means that committed transactions are removed from the log when the checkpoint operation runs. A checkpoint is basically when everything in memory gets written out to disk (yes, there’s more to it, but that’s enough for our current conversation). Checkpoints occur at irregular intervals.
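If you are not sure which model a given database is using, you can ask sys.databases (available from SQL Server 2005 onward):

SELECT name, recovery_model_desc
FROM sys.databases;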
Assuming your database is in simple recovery, the log only needs to be big enough to hold the transactions that are uncommitted between checkpoints. Depending on the size and number of your transactions, this could mean a small log, or a huge one. Note, I have not said that your log shrinks at checkpoint. It does not. In simple recovery your log is emptied of committed transactions at checkpoint (and yes, I’m repeating myself). This means any space allocated for the log remains allocated for the log. This is one of the reasons that your log file grows and does not shrink. Now let’s talk about the big one.
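To see how big the log file is and how much of it is actually in use, one quick check is DBCC SQLPERF (shown here as a convenient sketch, not the only option):

-- Reports log size (MB) and log space used (%) for every database
DBCC SQLPERF(LOGSPACE);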
I first mentioned full recovery and said that the logs are kept, even beyond a checkpoint. This means they stay in the log, waiting. More transactions are run, and assuming your log is growing by default, it gets bigger and more transactions are in the log waiting. This continues until you finally run a log backup process. This is completely independent from your full backups (although a marker for the last full backup is maintained as a part of the recovery process). The basic syntax looks like this:


BACKUP LOG MyDatabaseName
TO DISK = N'C:\Backups\MyDatabaseName_log.trn';  -- hypothetical path; any backup location or device works

This is a problem for two basic reasons: by default, new databases are created in full recovery mode, AND by default, no one has set up log backups on your system. That means it’s up to you. You have to set up log backups. You have to schedule them and you have to ensure they run if you want to recover your database to a point in time, which is also known as Full Recovery.
This is the important point: yes, setting the database to simple recovery can largely eliminate the problem of the ever-growing log file. But simple recovery takes away your ability to recover to a point in time. Meaning: a crash occurs at 4:57PM. Your last full backup was at 6:00AM. In simple recovery mode, you just lost almost eleven hours’ worth of business, because you can only recover to the full backup. With Full Recovery, you can do what is known as a tail log backup, and take that in combination with all the other backups and recover your database, at least up to the last log backup, but possibly even right up to when the error occurred.
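A tail log backup is simply a log backup taken after the failure. A minimal sketch, assuming the data files are damaged but the log file itself is still readable (the path and database name are examples):

-- Back up the tail of the log even though the database is damaged
BACKUP LOG MyDatabaseName
TO DISK = N'C:\Backups\MyDatabaseName_tail.trn'
WITH NO_TRUNCATE;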
You need to ask your business how much data they are prepared to lose. If they can deal with eleven hours, like our example above, great, you don’t need log backups. If, like most businesses I’ve worked with, they expect to recover everything, all the time, you’d better have log backups and full recovery.
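Whichever answer you get, switching the recovery model is a one-line change (the database name is hypothetical; pick the model that matches the business’s tolerance for data loss):

-- Point-in-time recovery; requires scheduled log backups
ALTER DATABASE MyDatabaseName SET RECOVERY FULL;

-- Or accept losing everything back to the last full (or differential) backup
ALTER DATABASE MyDatabaseName SET RECOVERY SIMPLE;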

If the log still will not clear, you can ask SQL Server directly why the log space cannot be reused:

SELECT log_reuse_wait, log_reuse_wait_desc, *
FROM sys.databases;


log_reuse_wait / log_reuse_wait_desc:

Reuse of transaction log space is currently waiting on one of the following:

0 = NOTHING
1 = CHECKPOINT
2 = LOG_BACKUP (if the reason is LOG_BACKUP, it may take two log backups to actually free the space)
3 = ACTIVE_BACKUP_OR_RESTORE
4 = ACTIVE_TRANSACTION
5 = DATABASE_MIRRORING
6 = REPLICATION
7 = DATABASE_SNAPSHOT_CREATION
8 = LOG_SCAN
9 = OTHER_TRANSIENT
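For example, to list only the databases whose log space cannot currently be reused, you might filter on the description column:

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE log_reuse_wait_desc <> N'NOTHING';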


Common Table Expression (CTE) in SQL Server 2005:


Most developers, while writing stored procedures, create temp tables or table variables. They need some table to store temporary results, in order to manipulate the data in other tables based on those results.

Temp tables are stored in tempdb and need to be dropped from the tempdb database when you are done with them.

A table variable is better than a temp table, because the table variable initially lives in memory up to a certain size, and only if it grows beyond that limit is it moved to the temp database. However, the scope of a table variable is only the current program. Compared with a table variable, the CTE is better still: it just stores the result set, like a normal view.
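As a rough sketch of the first two options, reusing the same Products table as the CTE examples below (these look like the Northwind sample tables):

-- Temp table: created in tempdb, must be dropped when you are done
CREATE TABLE #TempResult (ProductID INT, ProductName NVARCHAR(40));
INSERT INTO #TempResult
SELECT ProductID, ProductName FROM Products;
DROP TABLE #TempResult;

-- Table variable: scoped to the current batch or procedure; no explicit drop needed
DECLARE @TableResult TABLE (ProductID INT, ProductName NVARCHAR(40));
INSERT INTO @TableResult
SELECT ProductID, ProductName FROM Products;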

CTE (Common Table Expression):

The CTE is one of the essential features in SQL Server 2005. It just stores the result as a temporary result set, which can be accessed like a normal table or view, but only within the scope of the statement that immediately follows it.

The syntax of the CTE is as follows.

WITH name (alias names for the columns of the result set)
AS
(
    -- Write the SQL query here
)
SELECT * FROM name

Here the SELECT statement must come immediately after the CTE. The name is mandatory and the argument list is optional; it can be used to give aliases to the fields retrieved by the CTE.

CTE 1: Simple CTE

WITH ProductCTE
AS
(
    SELECT ProductID AS [ID], ProductName AS [Name], CategoryID AS [CID], UnitPrice AS [Price]
    FROM Products
)
SELECT * FROM ProductCTE

Here all the product details (ID, name, category ID and unit price) are retrieved and stored as a temporary result set in ProductCTE.

This result set can be retrieved like a table or view.

CTE 2: Simple CTE with aliases

WITH ProductCTE (ID, Name, Category, Price)
AS
(
    SELECT ProductID, ProductName, CategoryID, UnitPrice
    FROM Products
)
SELECT * FROM ProductCTE

Here four fields are retrieved from Products, and the alias names are given in the argument list after the CTE name.

The CTE also accepts aliases given inline in the SELECT, as in a normal query:

WITH ProductCTE
AS
(
    SELECT ProductID AS [ID], ProductName AS [Name], CategoryID AS [CID], UnitPrice AS [Price]
    FROM Products
)
SELECT * FROM ProductCTE

CTE 3: CTE joins with normal table

The result set of the CTE can be joined with any table, which also lets you enforce a relationship between the CTE and the other tables.

WITH OrderCustomer
AS
(
    SELECT DISTINCT CustomerID FROM Orders
)
SELECT C.CustomerID, C.CompanyName, C.ContactName, C.Address + ', ' + C.City AS [Address]
FROM Customers C
INNER JOIN OrderCustomer OC ON OC.CustomerID = C.CustomerID

Here the customers who have placed orders go into the CTE result set, which is then joined with the customer details.

CTE 4: Multiple result sets in the CTE

WITH MyCTE1
AS
(
    SELECT ProductID, SupplierID, CategoryID, UnitPrice, ProductName FROM Products
),
MyCTE2
AS
(
    SELECT DISTINCT ProductID FROM "Order Details"
)
SELECT C1.ProductID, C1.ProductName, C1.SupplierID, C1.CategoryID
FROM MyCTE1 C1
INNER JOIN MyCTE2 C2 ON C1.ProductID = C2.ProductID

Here, there are two result sets that will be filtered based on the join condition.

CTE 5: Union statements in the CTE

WITH PartProdCateSale
AS
(
    SELECT ProductID FROM Products
    WHERE CategoryID = (SELECT CategoryID FROM Categories WHERE CategoryName = 'Condiments')
    UNION ALL
    SELECT ProductID FROM Products
    WHERE CategoryID = (SELECT CategoryID FROM Categories WHERE CategoryName = 'Seafood')
)
SELECT OD.ProductID, SUM(OD.UnitPrice * OD.Quantity) AS [Total Sale]
FROM "Order Details" OD
INNER JOIN PartProdCateSale PPCS ON PPCS.ProductID = OD.ProductID
GROUP BY OD.ProductID

Normally, when we combine many result sets, we create a table and then insert into that table. But see here: we have combined them with UNION ALL, and instead of a table, a CTE has been used.

CTE 6: CTE with identity column

WITH MyCustomCTE
AS
(
    SELECT CustomerID, ROW_NUMBER() OVER (ORDER BY CustomerID) AS iNo
    FROM Customers
)
SELECT * FROM MyCustomCTE

Here ROW_NUMBER() generates a sequential number (iNo) for each customer, ordered by CustomerID, which acts like an identity column in the result set.


SQL SERVER – FIX: Could not connect because the maximum number of ‘1’ user connections has already been reached

We quickly started desktop sharing, and we saw the error below while connecting from SSMS.
TITLE: Connect to Database Engine
——————————
Cannot connect to DBSERVER.
——————————
ADDITIONAL INFORMATION:
A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: Shared Memory Provider, error: 0 – No process is on the other end of the pipe.) (Microsoft SQL Server, Error: 233)
——————————
No process is on the other end of the pipe
——————————
BUTTONS:
OK
——————————
This was a very generic error, so I asked to check the Application Event Log.
Log Name:      Application
Source:        MSSQLSERVER
Event ID:      17809
Task Category: Logon
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      DBSERVER.SomeDomain.in
Description:
Could not connect because the maximum number of ‘1’ user connections has already been reached. The system administrator can use sp_configure to increase the maximum value. The connection has been closed.
As per the above message, the maximum number of user connections was set to 1, and all we needed to do was change that value back to 0 (unlimited) using sp_configure.
Sounds simple, but the situation here was that someone was grabbing the connection so quickly that we were not able to connect. I recalled an earlier blog, where I wrote about single user mode.
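As a rough sketch of those steps (assuming a default instance named MSSQLSERVER on the machine DBSERVER, run from an elevated command prompt):

REM Restart the instance in single-user mode, allowing only a client named SQLCMD
net stop MSSQLSERVER
net start MSSQLSERVER /mSQLCMD

REM Grab the single available connection with a trusted (Windows) login
sqlcmd -S DBSERVER -E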
So, we followed the blog: started SQL Server with the /mSQLCMD parameter, connected to the SQL Server instance in single-user mode via SQLCMD, and then executed the following command.
-- 'user connections' is an advanced option; if it is not visible,
-- run EXEC sp_configure 'show advanced options', 1; RECONFIGURE; first
EXEC sp_configure 'user connections', 0;
GO
RECONFIGURE WITH OVERRIDE;
GO
Then we restarted SQL Server, after which we were able to connect successfully using SSMS, and the application also started working.
