How many MySQL rows is too many?

Eri*_*ari 18 mysql

I'm building a website that mainly relies on a database containing a table of organizations, one row per organization. Each organization can have an unlimited number of attached keywords. The keywords are stored in a separate table from the organizations, where each row is just a primary key, the keyword, and the primary key of the organization it is attached to. Eventually this table could have thousands of entries. Will pulling records from this table, and listing the unique keywords it contains, take too long?
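For reference, a minimal sketch of the schema being described might look like the following. The table and column names are illustrative guesses, not taken from the question.

```sql
-- Hypothetical schema matching the description: one row per organization,
-- plus a separate keyword table linked back by a foreign key.
CREATE TABLE organizations (
    id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255) NOT NULL
) ENGINE=InnoDB;

CREATE TABLE organization_keywords (
    id              INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    organization_id INT UNSIGNED NOT NULL,
    keyword         VARCHAR(100) NOT NULL,
    -- InnoDB creates an index on organization_id for this foreign key,
    -- which covers "all keywords for one organization"
    FOREIGN KEY (organization_id) REFERENCES organizations (id),
    -- explicit index for "list the unique keywords in the table"
    INDEX idx_keyword (keyword)
) ENGINE=InnoDB;
```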

Pas*_*TIN 17

Having a couple of hundred thousand rows is perfectly fine, as long as:

  • they are indexed properly
  • and your queries are written properly (i.e. they actually use the right indexes; see the sketch after this list for the kind of indexes and queries I mean)
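
As a hedged illustration, still using the hypothetical table names from the schema sketch above, these are the two queries from the question, plus EXPLAIN to check that they actually hit an index:

```sql
-- Keywords attached to one organization: served by the foreign-key index
SELECT keyword
FROM organization_keywords
WHERE organization_id = 42;

-- Unique keywords across the whole table: can be resolved from idx_keyword alone
SELECT DISTINCT keyword
FROM organization_keywords;

-- EXPLAIN shows which index (if any) each query uses; check the `key` column
EXPLAIN SELECT keyword FROM organization_keywords WHERE organization_id = 42;
EXPLAIN SELECT DISTINCT keyword FROM organization_keywords;
```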

I'm working on an application that does lots of queries on several tables with a couple of hundred thousand records in each, with joins and non-trivial WHERE clauses, and that application works fine -- well, since we've optimized the queries and indexes ^^


A couple of million rows, in those conditions, is OK too, I'd say -- it depends on what kind of queries (and how many of them) you'll be running ^^


In any case, there's only one way to know for sure:

  • You have to know what kind of queries you'll be doing,
  • You also have to have a large dataset to test against (one way to generate a synthetic one is sketched after this list),
  • And you have to benchmark: run the queries on that dataset, many times, with concurrency, as if under "real conditions" -- that will help answer the questions "will it handle the load? do I have to optimize? where are the bottlenecks?"
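
One hedged way to build such a test dataset directly in SQL (MySQL 8.0+, using a recursive CTE) is sketched below. It reuses the hypothetical table names from above and assumes the matching organization rows already exist, or that foreign-key checks are relaxed for the test; all the numbers are purely illustrative.

```sql
-- Generate ~1M synthetic keyword rows spread over 5,000 organizations,
-- reusing ~2,000 distinct keyword strings.
SET SESSION cte_max_recursion_depth = 1000000;

INSERT INTO organization_keywords (organization_id, keyword)
WITH RECURSIVE seq (n) AS (
    SELECT 1
    UNION ALL
    SELECT n + 1 FROM seq WHERE n < 1000000
)
SELECT 1 + (n MOD 5000),
       CONCAT('keyword_', n MOD 2000)
FROM seq;
```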

  • I maintain a 90GB reporting database (for web server logs) that has several multi-million-row tables, the largest being 318M rows. I can get results back from standard SELECT queries with joins on it in 10 to 50 ms (under moderate load). (24 upvotes)
  • @Seth Six years later, is it still around 10 to 50 ms, or far less on newer hardware? (2 upvotes)
  • I suspect the performance characteristics were largely bound by the spinning disks. Modern spinning disks are better than they were six years ago, but not by a lot. If it were on an SSD, I'd bet it would be an order of magnitude faster. (That database went away a few years ago, so I can't test it for real.) (2 upvotes)