How do I determine the maximum usable table size in SQL Server Express?

I've run into a problem with SQL Server Express that I have never encountered before.

I have a table (QST2D) with 32 million rows in it, with this set of columns:

CREATE TABLE [dbo].[QST2D] (
    [Insp_ID] [nvarchar](8) NOT NULL,
    [NodeID] [int] NULL,
    [NodeVisible] [bit] NULL,
    [Note] [nvarchar](max) NULL,
    [Points] [int] NULL,
    [QST_ID] [int] NULL,
    [QST_Num] [int] NOT NULL,
    [Response_Num] [int] NOT NULL,
    [Value] [nvarchar](max) NULL,
    [ValueID] [int] NULL,
    [ValueVisible] [bit] NULL,
    CONSTRAINT [QST2D_PK_i83] PRIMARY KEY CLUSTERED
        ([Insp_ID] ASC, [QST_Num] ASC, [Response_Num] ASC)
);

The table is roughly 1900 MB, maybe even a little more. It's hard to tell, because I can hardly do anything with the table without getting this error:

Msg 802: There is insufficient memory available in the buffer pool.

As I understand it, SQL Server Express gets 1 GB for the buffer pool. The error happens whenever I try to change the primary key or run DBCC DBREINDEX. The only way to get data out of the database and back in is BCP, which is very inconvenient (but, interestingly, it works): BCP lets me export the data, restructure the table (e.g. change the primary key), and then reload it.
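For reference, the operations that hit the error look roughly like this (a sketch based on the table definition above):

DBCC DBREINDEX ('QST2D');   -- rebuild all indexes on the table

-- or changing the primary key by dropping and recreating the constraint:
ALTER TABLE QST2D DROP CONSTRAINT QST2D_PK_i83;
ALTER TABLE QST2D ADD CONSTRAINT QST2D_PK_i83
    PRIMARY KEY CLUSTERED (Insp_ID ASC, QST_Num ASC, Response_Num ASC);

The BCP round trip looks roughly like this (the instance and database names here are placeholders, not my real ones):

bcp MyDatabase.dbo.QST2D out QST2D.dat -n -S .\SQLEXPRESS -T
rem ... drop and recreate the table with the new primary key, then reload:
bcp MyDatabase.dbo.QST2D in QST2D.dat -n -S .\SQLEXPRESS -T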

Anyway, further experiments: I reduced the row count to 8.2 million, which brought the table down to about 633 MB. I still get the same out-of-memory errors, which is baffling, because that doesn't seem like a lot of data to me.

At that point I dropped the two nvarchar(max) columns, which further reduced the table to around 540 MB. Even at that size I still run out of memory.
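For what it's worth, size and row count can be checked without scanning the table, since sp_spaceused only reads allocation metadata (a sketch):

-- size and row count from allocation metadata; does not scan the table
EXEC sp_spaceused N'dbo.QST2D';
-- returns rows, reserved, data, index_size and unused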

I can't tell whether the buffer pool is complaining about row count or table size, but based on this anecdotal evidence it feels like table size.

Does anyone have a solution, or at least an understanding of what is going on? I feel like we are barking up the wrong tree with SQL Server Express, which is too bad, because up to this point it has been perfect for our needs.



2 answers


I asked a similar question on a Microsoft forum and was told that DBCC DBREINDEX is not supported in 2014. I find that odd, since it works ... sometimes. But I can't rule out a language barrier, so instead I tried the equivalent statement:

ALTER INDEX ALL ON QST2D REBUILD;

It worked. While it was running, I kept running this query:

SELECT
    CASE
        WHEN database_id = 32767 THEN 'mssqlsystemresource'
        ELSE DB_NAME(database_id)
    END AS [Database],
    CONVERT(numeric(38, 2), (8.0 / 1024) * COUNT(*)) AS [MB in buffer cache]
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY 2 DESC;



This database uses quite a bit of memory:

(screenshot: the query output, showing per-database buffer cache usage in MB)

I kept re-running the query during the rebuild and watched it steal memory from the other databases until they had almost nothing left. I'm surprised the error goes away even while it's using this much memory; however, I don't know enough about SQL Server memory management to say whether only the buffer pool is involved or several other pools as well (possibly including the buffer pool). I'm just happy that it works, and so far it seems reliable.
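If you want to go one level below the buffer cache query above, something like this shows which memory clerks are holding the memory (just a sketch, I'm not claiming it tells the whole story):

-- top memory clerks by allocated pages (pages_kb exists in SQL Server 2012 and later)
SELECT TOP (10)
    type,
    SUM(pages_kb) / 1024 AS [MB]
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY [MB] DESC;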

The thing is, this worked on a 2+ GB table with 31M+ rows and a three-part composite key, which is more like what I expected. I still don't have a clear understanding of why DBCC DBREINDEX fails while ALTER INDEX ... REBUILD does not, when I believe they do similar things. Perhaps that is exactly the difference, i.e. one is a single atomic operation rather than a separate drop + create, and the combined operation may use more resources in one go. I may trace the database while it happens, just for my own edification, but since my immediate problem seems solved, I'll leave it at that.



From past experience, SQL Server Express has a 1 GB RAM limit for the buffer pool, as well as a maximum database size that depends on the version you are using (10 GB per database from SQL Server 2008 R2 Express onwards, 4 GB before that, if I remember right).
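If you are not sure which edition and version you are actually running, and therefore which limits apply, a quick check looks like this (a sketch):

-- which edition and version is this instance?
SELECT
    SERVERPROPERTY('Edition')        AS [Edition],
    SERVERPROPERTY('ProductVersion') AS [ProductVersion];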

There is a previous thread on SO about SQL Server Express limitations that could help with the general questions.

While digging around on MSDN, I found a nice little piece of text that I think may help you:



SQL Server requires sufficient memory to store the data in memory-optimized tables and indexes. For row versioning, you should provision an amount of memory that is twice the expected size of the memory-optimized tables and indexes. The actual amount of memory needed will depend on your workload, so you should monitor memory usage and adjust as necessary. The size of the data in memory-optimized tables must not exceed the allowed percentage of the pool.

To see the size of a memory-optimized table, see here for how to capture it (for SQL 2014).
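For example, something along these lines reports how much memory a memory-optimized table and its indexes use on SQL Server 2014 (my own sketch; the linked article may use a different query):

-- memory used by memory-optimized tables and their indexes (SQL Server 2014 and later)
SELECT
    OBJECT_NAME(object_id) AS [Table],
    (memory_used_by_table_kb + memory_used_by_indexes_kb) / 1024.0 AS [MB used]
FROM sys.dm_db_xtp_table_memory_stats
ORDER BY [MB used] DESC;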

Other articles that might be helpful: Maximum Rows in SQL Table? and Maximum Capacity Specifications for SQL Server.
