How do I insert 100,000 rows into SQL Server?

Problem description · Votes: 0 · Answers: 6
INSERT INTO pantscolor_t (procode, color, pic) 
VALUES
('74251', 'Black', '511black.jpg'),
('74251', 'OD Green', '511odgreen.jpg'),
('74251', 'Black', '511black.jpg'),
('74251', 'OD Green', '511odgreen.jpg'),
('74251', 'Black', '511black.jpg'),
('74251', 'OD Green', '511odgreen.jpg'),
..........
..........
..........

INSERT INTO pantscolor_t (procode,color,pic)
VALUES
('74251', 'Charcoal', '511charcoal.jpg'),
('74251', 'Charcoal', '511charcoal.jpg'),
('74251', 'Charcoal', '511charcoal.jpg'),
('74251', 'Charcoal', '511charcoal.jpg'),
.............
.............
.............

 INSERT INTO........................
 INSERT INTO........................
 INSERT INTO........................
 INSERT INTO........................

I have 100,000 rows, but my INSERT statements contain more than 1,000 rows each. When I run the SQL in SSMS, I get this error:

The number of row value expressions in the INSERT statement exceeds the maximum allowed number of 1000 row values.

sql sql-server sql-server-2008
6 Answers
32 votes

Another solution is to use a SELECT query with unions.

INSERT INTO pantscolor_t (procode,color,pic)
SELECT '74251', 'Black', '511black.jpg'
UNION ALL SELECT '74251', 'OD Green', '511odgreen.jpg'
UNION ALL SELECT '74251', 'Black', '511black.jpg'
UNION ALL SELECT '74251', 'OD Green', '511odgreen.jpg'
UNION ALL SELECT '74251', 'Black', '511black.jpg'
UNION ALL SELECT '74251', 'OD Green', '511odgreen.jpg'
--etc....
UNION ALL is used instead of UNION in order to speed up the query when processing thousands of records. UNION ALL allows duplicate rows, while UNION ensures that no duplicates exist in the result set. In this scenario we don't want to remove any possible duplicates, so UNION ALL is used.


24 votes
INSERT mytable (col1, col2, col3, col4, col5, col6)
SELECT * FROM (VALUES
('1502577', '0', '114', 'chodba', 'Praha', 'Praha 1'),
('1503483', '0', 'TVP', 'chodba', 'Praha', 'Praha 2'),
/* ... more than 1000 rows ... */
('1608107', '0', '8', 'sklad', 'Tlumačov', 'Tlumačov'),
('1608107', '0', '9', 'sklad', 'Tlumačov', 'Tlumačov')
) AS temp (col1, col2, col3, col4, col5, col6);
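
The 1,000-row limit applies only when a table value constructor is used directly as the source of an INSERT ... VALUES statement; wrapped in a derived table and read through INSERT ... SELECT, as above, the same VALUES list may exceed 1,000 rows.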

23 votes

Create a CSV file (or any file with defined field and row delimiters) and load it into the database with BULK INSERT. The file can have 100,000 rows; there will be no problem loading large files with bulk load.

http://msdn.microsoft.com/en-us/library/ms188365.aspx
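
A minimal sketch of what that could look like for the question's table, assuming a hypothetical file at C:\pantscolor.csv whose columns match pantscolor_t:

BULK INSERT pantscolor_t
FROM 'C:\pantscolor.csv'      -- hypothetical file path
WITH
(
    FIELDTERMINATOR = ',',    -- field delimiter used in the file
    ROWTERMINATOR = '\n',     -- row delimiter
    FIRSTROW = 2              -- skip a header row, if the file has one
);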


10 votes

You should not get any errors if you apply the following:

INSERT INTO pantscolor_t (procode,color,pic) VALUES ('74251','Black','511black.jpg')

INSERT INTO pantscolor_t (procode,color,pic) VALUES ('74251', 'OD Green', '511odgreen.jpg')

INSERT INTO pantscolor_t (procode,color,pic) VALUES ('74251', 'Black', '511black.jpg')

INSERT INTO pantscolor_t (procode,color,pic) VALUES ('74251', 'OD Green', '511odgreen.jpg')

INSERT INTO pantscolor_t (procode,color,pic) VALUES ('74251', 'Black', '511black.jpg')

...........

I tried it and it does work; of course, you can easily concatenate the values using Excel, as sketched below.
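
As a sketch of the Excel approach, assuming procode, color, and pic sit in columns A, B, and C (a hypothetical layout), a formula like the following builds one INSERT statement per row and can be filled down the sheet:

="INSERT INTO pantscolor_t (procode,color,pic) VALUES ('"&A1&"','"&B1&"','"&C1&"')"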


0 votes

Create a CSV file, out.csv or similar, then use:

BULK INSERT pantscolor_t
FROM 'C:\out.csv'
WITH
(
    FIELDTERMINATOR = ',',   -- comma between fields
    ROWTERMINATOR = '\n'     -- newline between rows
)
GO
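
One caveat: when '\n' is specified as the row terminator, BULK INSERT expects a carriage return/line feed (CRLF) pair; for a file that ends rows with a bare line feed (LF), specify ROWTERMINATOR = '0x0a' instead.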

0 votes

Prelude

I tried some of the other answers here while manually inserting 250,000 rows, split into 5 batches of 50,000 rows each. Before settling on my own solution I made several attempts, each running for about 5 minutes (all of them manually cancelled before completion).

For example, user12408924's answer (quoted below) let me attempt inserts of more than 1,000 rows, but it was very slow (for reference, I was inserting into a table made up of 11x VARCHAR(255) columns, 1x VARCHAR(1) column, 2x DATE columns, and 1x FLOAT column, all nullable).

Nothing against their answer, it's great! It just didn't fit my particular use case.

INSERT mytable (col1, col2, col3, col4, col5, col6)
SELECT * FROM (VALUES
('1502577', '0', '114', 'chodba', 'Praha', 'Praha 1'),
('1503483', '0', 'TVP', 'chodba', 'Praha', 'Praha 2'),
/* ... more than 1000 rows ... */
('1608107', '0', '8', 'sklad', 'Tlumačov', 'Tlumačov'),
('1608107', '0', '9', 'sklad', 'Tlumačov', 'Tlumačov')
) AS temp (col1, col2, col3, col4, col5, col6);

Solution: OPENJSON

So I went with a janky solution that shouldn't have worked, but it worked great with zero issues: as noted above, each 50,000-row insert completed in under 3 seconds (obviously machine dependent, but the prior attempts took more than 5 minutes).

The batch below is an example with far fewer rows and columns, but it is otherwise verbatim what I successfully executed:

DECLARE @JsonArr NVARCHAR(MAX) = N'[
    {"UserID":"001","FirstName":"Alpha","LastName":"First","Email":"[email protected]","BirthDate":"1970-01-01"},
    {"UserID":"002","FirstName":"Bravo","LastName":"Second","Email":"[email protected]","BirthDate":"1970-01-02"},
    {"UserID":"003","FirstName":"Charlie","LastName":"Third","Email":"[email protected]","BirthDate":"1970-01-03"},
    {"UserID":"004","FirstName":"Delta","LastName":"Fourth","Email":"[email protected]","BirthDate":"1970-01-04"},
    {"UserID":"005","FirstName":"Foxtrot","LastName":"Fifth","Email":"[email protected]","BirthDate":"1970-01-05"},
    {"UserID":"006","FirstName":"Golf","LastName":"Sixth","Email":"[email protected]","BirthDate":"1970-01-06"},
    {"UserID":"007","FirstName":"Hotel","LastName":"Seventh","Email":"[email protected]","BirthDate":"1970-01-07"}
]';
INSERT INTO
    [DBName].[SchemaName].[TargetName]
        ([UserID], [FirstName], [LastName], [Email], [BirthDate])
    SELECT [UserID], [FirstName], [LastName], [Email], [BirthDate]
        FROM OPENJSON(@JsonArr)
            WITH (
                    [UserID] VARCHAR(255)
                        COLLATE SQL_Latin1_General_CP1_CI_AS
                    , [FirstName] VARCHAR(255)
                        COLLATE SQL_Latin1_General_CP1_CI_AS
                    , [LastName] VARCHAR(255)
                        COLLATE SQL_Latin1_General_CP1_CI_AS
                    , [Email] VARCHAR(255)
                        COLLATE SQL_Latin1_General_CP1_CI_AS
                    , [BirthDate] DATE
                );
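
Note that OPENJSON is only available when the database compatibility level is 130 or higher.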

Limitations

The MSSQL NVARCHAR documentation specifies that the maximum space allocated for NVARCHAR(MAX) is 2 GB:

Variable-size string data. The value of n defines the string size in byte-pairs, and can be from 1 through 4,000. max indicates that the maximum storage size is 2^31-1 characters (2 GB).
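
In practice this means a very large dataset must be split across several batches, each with its own @JsonArr payload, much like the 5 batches of 50,000 rows described above.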
