SQL insert slows down at 1 million rows

With TOP 100000 (100k), this query completed in about 3 seconds.

With TOP 1000000 (1 million), it completed in about 2 minutes.

SELECT TOP 1000000
    db_id = IDENTITY(int, 1, 1), *
INTO dbo.tablename
FROM dbname.dbo.tablename


The actual execution plan is always:

Clustered Index Scan (4% cost)
Top
Top
Compute Scalar
Insert (96% cost)
SELECT INTO


The table has 1.3 million rows and an int primary key in the first column.

Can I speed it up? I am using SQL Server 2008 R2.
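For reference, a minimal sketch of how the two runs can be timed and the I/O statistics captured. SET STATISTICS TIME/IO are standard SQL Server options; the dbo.tablename names are the placeholders from the question, and the _100k/_1m target names are made up for the example and must not already exist:

-- Report CPU/elapsed time and logical reads for each statement
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

SELECT TOP 100000
    db_id = IDENTITY(int, 1, 1), *
INTO dbo.tablename_100k        -- hypothetical target table
FROM dbname.dbo.tablename;     -- placeholder source table from the question

SELECT TOP 1000000
    db_id = IDENTITY(int, 1, 1), *
INTO dbo.tablename_1m          -- hypothetical target table
FROM dbname.dbo.tablename;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;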



2 answers


Here is a complete script showing that 1,000,000 rows take less than ten times as long as 100,000. Your situation is probably a little different, but this demonstrates that the basic operation is not the problem.

The results show that 100,000 records take 146 ms and 1,000,000 records take 1,315 ms, a ratio of about 9x, i.e. roughly linear scaling.

These results are from my desktop. If anyone else can run the script and post their results, that would be very helpful.



Rob

USE master;
GO
-- Drop database SourceDB
IF EXISTS (SELECT * FROM sys.databases WHERE name = 'SourceDB') ALTER DATABASE SourceDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
IF EXISTS (SELECT * FROM sys.databases WHERE name = 'SourceDB') DROP DATABASE  SourceDB;
GO
-- Create database SourceDB
CREATE DATABASE SourceDB; 
ALTER DATABASE SourceDB SET RECOVERY SIMPLE;
GO
USE SourceDB;
GO
-- Create table SourceDB.dbo.SourceTable
CREATE TABLE dbo.SourceTable (
    ColID int PRIMARY KEY
);
GO
-- Populate table SourceDB.dbo.SourceTable
DECLARE @i int = 0;
WHILE @i < 1300000
BEGIN
    SET @i += 1;
    INSERT INTO dbo.SourceTable (ColID) VALUES (@i);
END;
GO
-- Drop database Test1
IF EXISTS (SELECT * FROM sys.databases WHERE name = 'Test1') ALTER DATABASE Test1 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
IF EXISTS (SELECT * FROM sys.databases WHERE name = 'Test1') DROP DATABASE  Test1;
GO
-- Create database Test1
CREATE DATABASE Test1;
ALTER DATABASE Test1 SET RECOVERY SIMPLE;
ALTER DATABASE Test1 MODIFY FILE (NAME = Test1, SIZE = 3000MB, MAXSIZE = 8TB);
ALTER DATABASE Test1 MODIFY FILE (NAME = Test1_log, SIZE = 3000MB, MAXSIZE = 2TB);
GO
USE Test1;
GO
IF EXISTS (SELECT * FROM sys.tables WHERE [OBJECT_ID] = OBJECT_ID('dbo.DestinationTable1')) DROP TABLE dbo.DestinationTable1;
IF EXISTS (SELECT * FROM sys.tables WHERE [OBJECT_ID] = OBJECT_ID('dbo.DestinationTable2')) DROP TABLE dbo.DestinationTable2;
GO
DECLARE @n  int       = 100000;
DECLARE @t1 datetime2 = SYSDATETIME();
SELECT TOP (@n) db_id = IDENTITY(int, 1, 1), *
INTO dbo.DestinationTable1
FROM SourceDB.dbo.SourceTable;
SELECT DATEDIFF(ms, @t1, SYSDATETIME()) AS ElapsedMs;
GO
DECLARE @n  int       = 1000000;
DECLARE @t1 datetime2 = SYSDATETIME();
SELECT TOP (@n) db_id = IDENTITY(int, 1, 1), *
INTO dbo.DestinationTable2
FROM SourceDB.dbo.SourceTable;
SELECT DATEDIFF(ms, @t1, SYSDATETIME()) AS ElapsedMs;
GO
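One design point in the script worth calling out: the Test1 data and log files are pre-sized to 3000 MB, so the timed inserts never pause for file autogrowth. If the destination database in the original question grows its files in small increments during the SELECT INTO, that alone could account for much of the 2-minute runtime. A quick way to check the current file sizes and growth settings (standard sys.database_files catalog view, run in the destination database) is:

-- File sizes and growth settings for the current database;
-- size and growth are stored as 8 KB pages, so multiply by 8/1024 for MB.
SELECT name,
       type_desc,
       size * 8 / 1024 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.database_files;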



The results showed that 100,000 records took 159 ms and 1,000,000 records took 1,435 ms. RAID 1 OS, RAID 1 data, RAID 1 log, RAID 1 tempdb, all on separate disks. Our dev environment.

The results showed that 100,000 records took 113 ms and 1,000,000 records took 996 ms. On my laptop with a single SSD (Samsung 840 250 GB). SSDs rock!!!



The results showed that 100,000 records took 188 ms and 1,000,000 records took 1,880 ms. RAID 1 OS, RAID 10 data, RAID 10 log, RAID 1 tempdb, all on separate disks, under production load.






