Chr*_*s L (16) · tags: join, sql-server, subquery
Why does SQL Server use parallelism when running this query written with a subquery, but not when it is written as a join? The join version runs serially and takes roughly 30 times longer to complete.

Join version: ~30 secs
Subquery version: <1 sec

Edit: XML versions of the query plans:
Mar*_*ith (12)
As already indicated in the comments, it looks as though you need to update your statistics. The estimated number of rows coming out of the join between location and testruns differs hugely between the two plans.
Join plan estimate: 1 row
Subquery plan estimate: 8,748 rows
Actual rows coming out of the join: 14,276
Of course it makes no intuitive sense that the join version should estimate that three rows coming from location produce a single joined row, while the subquery version estimates that just one of those rows produces 8,748 rows from the same join, but nonetheless I was able to reproduce this. It seems to happen when the histograms on the two join columns do not overlap at the time the statistics are created: the join version then assumes a single row, while the single equality seek in the subquery version is costed the same as an equality seek against an unknown variable.
The cardinality of testruns is 26,244. Assuming the table is populated with three distinct location ids, the following query estimates that 8,748 rows will be returned (26,244 / 3):
declare @i int
SELECT *
FROM testruns AS tr
WHERE tr.location_id = @i
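To see where the 8,748 figure comes from, you can inspect the density vector of the statistics on testruns. This is an illustrative check, assuming the repro objects (and the index name IX) created later in this answer: for an unknown value, the estimate is table cardinality multiplied by the column's "All density".

```sql
-- Show the density vector for the IX statistics on testruns.
-- With 3 distinct location_id values, "All density" is 1/3.
DBCC SHOW_STATISTICS ('testruns', IX) WITH DENSITY_VECTOR;

-- Estimate for an equality seek against an unknown variable:
--   rows * density = 26244 * (1/3) = 8748
```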
Given that the table locations only contains 3 rows, it is easy (if we assume no foreign keys) to contrive a situation where the statistics are created and the data is then altered in a way that dramatically affects the actual number of rows returned, yet is insufficient to trip the auto-update of statistics and the recompile threshold. For a table this small that threshold is 500 modifications, so a three-row update never triggers either.
Because SQL Server gets the number of rows coming out of that join so wrong, all the other row estimates in the join plan are massively underestimated. As well as giving you a serial plan, the query also receives an insufficient memory grant, and the sorts and hash joins spill to tempdb.
One possible scenario that reproduces the actual vs estimated rows shown in your plan is below.
CREATE TABLE location
(
id INT CONSTRAINT locationpk PRIMARY KEY,
location VARCHAR(MAX) /*From the separate filter think you are using max?*/
)
/*Temporary ids these will be updated later*/
INSERT INTO location
VALUES (101, 'Coventry'),
(102, 'Nottingham'),
(103, 'Derby')
CREATE TABLE testruns
(
location_id INT
)
CREATE CLUSTERED INDEX IX ON testruns(location_id)
/*Add in 26244 rows of data split over three ids*/
INSERT INTO testruns
SELECT TOP (5984) 1
FROM master..spt_values v1, master..spt_values v2
UNION ALL
SELECT TOP (5984) 2
FROM master..spt_values v1, master..spt_values v2
UNION ALL
SELECT TOP (14276) 3
FROM master..spt_values v1, master..spt_values v2
/*Create statistics. The location_id histograms don't intersect at all*/
UPDATE STATISTICS location(locationpk) WITH FULLSCAN;
UPDATE STATISTICS testruns(IX) WITH FULLSCAN;
/* UPDATE location.id. Three row update is below recompile threshold*/
UPDATE location
SET id = id - 100
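As an optional check (not part of the original repro), you can confirm the staleness after the update: the histogram on location.id still describes the pre-update key range, while the table itself now holds the new values.

```sql
-- The histogram for the locationpk statistics still shows the
-- pre-update keys 101-103, even though the rows are now 1-3.
DBCC SHOW_STATISTICS ('location', locationpk) WITH HISTOGRAM;

SELECT id FROM location;  -- now returns 1, 2, 3
```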
Then running the following queries shows the same estimated vs actual discrepancy:
SELECT *
FROM testruns AS tr
WHERE tr.location_id = (SELECT id
FROM location
WHERE location = 'Derby')
SELECT *
FROM testruns AS tr
JOIN location loc
ON tr.location_id = loc.id
WHERE loc.location = 'Derby'
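As suggested at the start of this answer, refreshing the statistics is the natural remedy here. This is a sketch against the repro objects above, not a guaranteed fix for every variant of the problem:

```sql
-- Rebuild the stale statistics so the histograms reflect the current ids.
UPDATE STATISTICS location WITH FULLSCAN;
UPDATE STATISTICS testruns WITH FULLSCAN;

-- OPTION (RECOMPILE) forces a fresh plan for a single statement, but it
-- cannot help on its own while the underlying statistics are still stale.
SELECT *
FROM testruns AS tr
JOIN location loc
ON tr.location_id = loc.id
WHERE loc.location = 'Derby'
OPTION (RECOMPILE);
```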
Viewed: 1,271 times