Optimizing a query that groups results by a field from a joined table

Question · Votes: 0 · Answers: 2

I have a fairly simple query that has to group results by a field from a joined table:

SELECT SQL_NO_CACHE p.name, COUNT(1) FROM ycs_sales s
INNER JOIN ycs_products p ON s.id = p.sales_id 
WHERE s.dtm BETWEEN '2018-02-16 00:00:00' AND  '2018-02-22 23:59:59'
GROUP BY p.name

The table ycs_products is really sales_products: it lists the products in each sale. I want to see the share of each product sold over a period of time.
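As a side note on the "share" part: the grouped query only returns raw counts. Assuming MySQL 8.0+ (or MariaDB 10.2+), a window function over the grouped counts can produce the percentage directly; this is a sketch, and `share_pct` is just an illustrative alias:

```sql
-- Sketch: per-product share of sales in the period
-- (assumes window-function support, i.e. MySQL 8.0+ / MariaDB 10.2+)
SELECT p.name,
       COUNT(*) AS cnt,
       100 * COUNT(*) / SUM(COUNT(*)) OVER () AS share_pct  -- share of period total
FROM ycs_sales s
INNER JOIN ycs_products p ON s.id = p.sales_id
WHERE s.dtm BETWEEN '2018-02-16 00:00:00' AND '2018-02-22 23:59:59'
GROUP BY p.name;
```

The window aggregate `SUM(COUNT(*)) OVER ()` is evaluated after grouping, so it totals the grouped counts without a second scan.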

The query currently takes 2 seconds, which is too long for user interaction. I need to make this query fast. Is there a way to get rid of Using temporary without denormalization?

The join order matters a lot: there is a large amount of data in both tables, and limiting the number of records by date is a non-negotiable prerequisite.

Here is the EXPLAIN output:

*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: s
         type: range
possible_keys: PRIMARY,dtm
          key: dtm
      key_len: 6
          ref: NULL
         rows: 1164728
        Extra: Using where; Using index; Using temporary; Using filesort
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: p
         type: ref
possible_keys: sales_id
          key: sales_id
      key_len: 5
          ref: test.s.id
         rows: 1
        Extra: 
2 rows in set (0.00 sec)

And the same in JSON format:

EXPLAIN: {
  "query_block": {
    "select_id": 1,
    "filesort": {
      "sort_key": "p.`name`",
      "temporary_table": {
        "table": {
          "table_name": "s",
          "access_type": "range",
          "possible_keys": ["PRIMARY", "dtm"],
          "key": "dtm",
          "key_length": "6",
          "used_key_parts": ["dtm"],
          "rows": 1164728,
          "filtered": 100,
          "attached_condition": "s.dtm between '2018-02-16 00:00:00' and '2018-02-22 23:59:59'",
          "using_index": true
        },
        "table": {
          "table_name": "p",
          "access_type": "ref",
          "possible_keys": ["sales_id"],
          "key": "sales_id",
          "key_length": "5",
          "used_key_parts": ["sales_id"],
          "ref": ["test.s.id"],
          "rows": 1,
          "filtered": 100
        }
      }
    }
  }
}

And the CREATE TABLE statements, although I consider them unnecessary here:

    CREATE TABLE `ycs_sales` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `dtm` datetime DEFAULT NULL,
      PRIMARY KEY (`id`),
      KEY `dtm` (`dtm`)
    ) ENGINE=InnoDB AUTO_INCREMENT=2332802 DEFAULT CHARSET=latin1;

    CREATE TABLE `ycs_products` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `sales_id` int(11) DEFAULT NULL,
      `name` varchar(255) DEFAULT NULL,
      PRIMARY KEY (`id`),
      KEY `sales_id` (`sales_id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=2332802 DEFAULT CHARSET=latin1;

And the PHP code used to replicate the test environment:

#$pdo->query("set global innodb_flush_log_at_trx_commit = 2");
$pdo->query("create table ycs_sales (id int auto_increment primary key, dtm datetime)");
$stmt = $pdo->prepare("insert into ycs_sales values (null, ?)");
foreach (range(mktime(0,0,0,2,1,2018), mktime(0,0,0,2,28,2018)) as $stamp){
    // one sale per second throughout February 2018
    $stmt->execute([date("Y-m-d H:i:s", $stamp)]);
}
$max_id = $pdo->lastInsertId();
$pdo->query("alter table ycs_sales add key(dtm)");

$pdo->query("create table ycs_products (id int auto_increment primary key, sales_id int, name varchar(255))");
$stmt = $pdo->prepare("insert into ycs_products values (null, ?, ?)");
$products = ['food', 'drink', 'vape'];
foreach (range(1, $max_id) as $id){
    $stmt->execute([$id, $products[rand(0,2)]]);
}
$pdo->query("alter table ycs_products add key(sales_id)");
Tags: mysql · sql · join · group-by · query-optimization
2 Answers

1 vote

The problem is that grouping by name loses the sales_id information, so MySQL is forced to use a temporary table.

While it's not the cleanest solution, and it's one of my least favorite approaches, you could add a new index on both the name and sales_id columns, like:

ALTER TABLE `yourdb`.`ycs_products` 
ADD INDEX `name_sales_id_idx` (`name` ASC, `sales_id` ASC);

and force the query to use this index, with either force index or use index:

SELECT SQL_NO_CACHE p.name, COUNT(1) FROM ycs_sales s
INNER JOIN ycs_products p use index(name_sales_id_idx) ON s.id = p.sales_id 
WHERE s.dtm BETWEEN '2018-02-16 00:00:00' AND  '2018-02-22 23:59:59'
GROUP BY p.name;

My execution reports only "Using where; Using index" on table p and "Using where" on table s.

Anyway, I strongly suggest you reconsider your schema, because you could probably find a better design for these two tables. On the other hand, if this is not a critical part of your application, you can live with the "forced" index.

EDIT

Since the problem is clearly in the design, I suggest modeling the relationship as a many-to-many one. If you have a chance to verify it in a test environment, here's what I would do:

1) Create a temporary table just to store the name and ID of each product:

create temporary table tmp_prods
select min(id) id, name
from ycs_products
group by name;

2) Starting from the temporary table, create the replacement for ycs_products:

create table ycs_products_new
select * from tmp_prods;

ALTER TABLE `poc`.`ycs_products_new` 
CHANGE COLUMN `id` `id` INT(11) NOT NULL ,
ADD PRIMARY KEY (`id`);

3) Create the junction table:

CREATE TABLE `prod_sale` (
`prod_id` INT(11) NOT NULL,
`sale_id` INT(11) NOT NULL,
PRIMARY KEY (`prod_id`, `sale_id`),
INDEX `sale_fk_idx` (`sale_id` ASC),
CONSTRAINT `prod_fk`
  FOREIGN KEY (`prod_id`)
  REFERENCES ycs_products_new (`id`)
  ON DELETE NO ACTION
  ON UPDATE NO ACTION,
CONSTRAINT `sale_fk`
  FOREIGN KEY (`sale_id`)
  REFERENCES ycs_sales (`id`)
  ON DELETE NO ACTION
  ON UPDATE NO ACTION);

and fill it with the existing values:

insert into prod_sale (prod_id, sale_id)
select tmp_prods.id, sales_id from ycs_sales s
inner join ycs_products p
on p.sales_id=s.id
inner join tmp_prods on tmp_prods.name=p.name;

And finally, the join query:

select name, count(name) from ycs_products_new p
inner join prod_sale ps on ps.prod_id=p.id
inner join ycs_sales s on s.id=ps.sale_id 
WHERE s.dtm BETWEEN '2018-02-16 00:00:00' AND  '2018-02-22 23:59:59'
group by p.id;

Note that the group by is on the primary key, not on name.

EXPLAIN output:

explain select name, count(name) from ycs_products_new p inner join prod_sale ps on ps.prod_id=p.id inner join ycs_sales s on s.id=ps.sale_id  WHERE s.dtm BETWEEN '2018-02-16 00:00:00' AND  '2018-02-22 23:59:59' group by p.id;
+------+-------------+-------+--------+---------------------+---------+---------+-----------------+------+-------------+
| id   | select_type | table | type   | possible_keys       | key     | key_len | ref             | rows | Extra       |
+------+-------------+-------+--------+---------------------+---------+---------+-----------------+------+-------------+
|    1 | SIMPLE      | p     | index  | PRIMARY             | PRIMARY | 4       | NULL            |    3 |             |
|    1 | SIMPLE      | ps    | ref    | PRIMARY,sale_fk_idx | PRIMARY | 4       | test.p.id       |    1 | Using index |
|    1 | SIMPLE      | s     | eq_ref | PRIMARY,dtm         | PRIMARY | 4       | test.ps.sale_id |    1 | Using where |
+------+-------------+-------+--------+---------------------+---------+---------+-----------------+------+-------------+

0 votes

Why have id in ycs_products? It seems that sales_id should be the PRIMARY KEY of that table.

If that is possible, it would eliminate the performance problem by getting rid of the issue the surrogate id brings.
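That change could be applied roughly as follows. This is only a sketch, and only valid if every sale has exactly one ycs_products row (i.e. sales_id is unique and never NULL in practice):

```sql
-- Sketch: promote sales_id to PRIMARY KEY, assuming one product row per sale.
-- Dropping the auto-increment id column also drops its primary-key index.
ALTER TABLE ycs_products
  MODIFY sales_id INT NOT NULL,
  DROP COLUMN id,
  DROP INDEX sales_id,
  ADD PRIMARY KEY (sales_id);
```

With sales_id as the clustered primary key, the join from ycs_sales becomes a direct primary-key lookup.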

If, instead, there are multiple rows per sales_id, then changing the secondary index to this would help:

INDEX(sales_id, name)
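Concretely, the index change could be applied like this (a sketch; dropping the old single-column index is optional, but it is a redundant left-prefix of the new one):

```sql
-- Replace the single-column index with a covering (sales_id, name) index,
-- so the join can read name directly from the index without touching the row.
ALTER TABLE ycs_products
  DROP INDEX sales_id,
  ADD INDEX sales_id_name (sales_id, name);
```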

Another thing to check is innodb_buffer_pool_size. It should be about 70% of available RAM. This improves the cacheability of data and indexes.
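The current value can be inspected and, since MySQL 5.7.5, resized online; the 8 GB figure below is purely illustrative and should be derived from the ~70% rule above:

```sql
-- Check the current setting (stored in bytes)
SELECT @@innodb_buffer_pool_size / 1024 / 1024 / 1024 AS buffer_pool_gb;

-- Example resize to 8 GB; dynamic in MySQL 5.7.5+,
-- on older versions set it in my.cnf and restart the server
SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;
```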

Do you really have 1.1 million rows in that one week?

© www.soinside.com 2019 - 2024. All rights reserved.