tha*_*guy 6 cassandra cassandra-3.0
I have a Cassandra table whose key looks like this:
PRIMARY KEY (("k1", "k2"), "c1", "c2")
) WITH CLUSTERING ORDER BY ("c1" DESC, "c2" DESC);
When I fully constrain the query, it takes longer than when I omit the last clustering key. It also performs an "Adding to feed memtable" step, which the less constrained query does not. Why is this? I know this query did not used to add entries to the memtable, because I have custom code that runs whenever something is added to the memtable. That code should only run when something is inserted or modified, but it started running when I was merely querying items.
Edit: I should mention that both queries return 1 row, and it is the same record.
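For context, the table definition looks roughly like this (a minimal sketch: non-key payload columns are omitted, and the column types shown are illustrative only):

CREATE TABLE feed (
    "k1" text,
    "k2" text,
    "c1" text,  -- holds an ISO-8601 timestamp string in the queries below
    "c2" text,
    -- payload columns omitted
    PRIMARY KEY (("k1", "k2"), "c1", "c2")
) WITH CLUSTERING ORDER BY ("c1" DESC, "c2" DESC);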
activity | timestamp | source | source_elapsed | client
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------+---------------+----------------+------------
Execute CQL3 query | 2017-09-05 18:09:37.456000 | **.***.**.237 | 0 | ***.**.*.4
Parsing select c2 from feed where k1 = 'AAA' and k2 = 'BBB' and c1 = '2017-09-05T16:09:00.222Z' and c2 = 'CCC'; [SharedPool-Worker-1] | 2017-09-05 18:09:37.456000 | **.***.**.237 | 267 | ***.**.*.4
Preparing statement [SharedPool-Worker-1] | 2017-09-05 18:09:37.456000 | **.***.**.237 | 452 | ***.**.*.4
Executing single-partition query on feed [SharedPool-Worker-3] | 2017-09-05 18:09:37.457000 | **.***.**.237 | 1253 | ***.**.*.4
Acquiring sstable references [SharedPool-Worker-3] | 2017-09-05 18:09:37.457000 | **.***.**.237 | 1312 | ***.**.*.4
Merging memtable contents [SharedPool-Worker-3] | 2017-09-05 18:09:37.457000 | **.***.**.237 | 1370 | ***.**.*.4
Key cache hit for sstable 22 [SharedPool-Worker-3] | 2017-09-05 18:09:37.463000 | **.***.**.237 | 6939 | ***.**.*.4
Key cache hit for sstable 21 [SharedPool-Worker-3] | 2017-09-05 18:09:37.463000 | **.***.**.237 | 7077 | ***.**.*.4
Key cache hit for sstable 12 [SharedPool-Worker-3] | 2017-09-05 18:09:37.463000 | **.***.**.237 | 7137 | ***.**.*.4
Key cache hit for sstable 6 [SharedPool-Worker-3] | 2017-09-05 18:09:37.463000 | **.***.**.237 | 7194 | ***.**.*.4
Key cache hit for sstable 3 [SharedPool-Worker-3] | 2017-09-05 18:09:37.463000 | **.***.**.237 | 7249 | ***.**.*.4
Merging data from sstable 10 [SharedPool-Worker-3] | 2017-09-05 18:09:37.463000 | **.***.**.237 | 7362 | ***.**.*.4
Key cache hit for sstable 10 [SharedPool-Worker-3] | 2017-09-05 18:09:37.463001 | **.***.**.237 | 7429 | ***.**.*.4
Key cache hit for sstable 9 [SharedPool-Worker-3] | 2017-09-05 18:09:37.463001 | **.***.**.237 | 7489 | ***.**.*.4
Key cache hit for sstable 4 [SharedPool-Worker-3] | 2017-09-05 18:09:37.463001 | **.***.**.237 | 7628 | ***.**.*.4
Key cache hit for sstable 7 [SharedPool-Worker-3] | 2017-09-05 18:09:37.463001 | **.***.**.237 | 7720 | ***.**.*.4
Defragmenting requested data [SharedPool-Worker-3] | 2017-09-05 18:09:37.463001 | **.***.**.237 | 7779 | ***.**.*.4
Adding to feed memtable [SharedPool-Worker-4] | 2017-09-05 18:09:37.464000 | **.***.**.237 | 7896 | ***.**.*.4
Read 1 live and 4 tombstone cells [SharedPool-Worker-3] | 2017-09-05 18:09:37.464000 | **.***.**.237 | 7932 | ***.**.*.4
Request complete | 2017-09-05 18:09:37.464092 | **.***.**.237 | 8092 | ***.**.*.4
activity | timestamp | source | source_elapsed | client
-------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------+---------------+----------------+------------
Execute CQL3 query | 2017-09-05 18:09:44.703000 | **.***.**.237 | 0 | ***.**.*.4
Parsing select c2 from feed where k1 = 'AAA' and k2 = 'BBB' and c1 = '2017-09-05T16:09:00.222Z'; [SharedPool-Worker-1] | 2017-09-05 18:09:44.704000 | **.***.**.237 | 508 | ***.**.*.4
Preparing statement [SharedPool-Worker-1] | 2017-09-05 18:09:44.704000 | **.***.**.237 | 717 | ***.**.*.4
Executing single-partition query on feed [SharedPool-Worker-2] | 2017-09-05 18:09:44.704000 | **.***.**.237 | 1377 | ***.**.*.4
Acquiring sstable references [SharedPool-Worker-2] | 2017-09-05 18:09:44.705000 | **.***.**.237 | 1499 | ***.**.*.4
Key cache hit for sstable 10 [SharedPool-Worker-2] | 2017-09-05 18:09:44.705000 | **.***.**.237 | 1730 | ***.**.*.4
Skipped 8/9 non-slice-intersecting sstables, included 5 due to tombstones [SharedPool-Worker-2] | 2017-09-05 18:09:44.705000 | **.***.**.237 | 1804 | ***.**.*.4
Key cache hit for sstable 22 [SharedPool-Worker-2] | 2017-09-05 18:09:44.705000 | **.***.**.237 | 1858 | ***.**.*.4
Key cache hit for sstable 21 [SharedPool-Worker-2] | 2017-09-05 18:09:44.705000 | **.***.**.237 | 1908 | ***.**.*.4
Key cache hit for sstable 12 [SharedPool-Worker-2] | 2017-09-05 18:09:44.705000 | **.***.**.237 | 1951 | ***.**.*.4
Key cache hit for sstable 6 [SharedPool-Worker-2] | 2017-09-05 18:09:44.705001 | **.***.**.237 | 2002 | ***.**.*.4
Key cache hit for sstable 3 [SharedPool-Worker-2] | 2017-09-05 18:09:44.705001 | **.***.**.237 | 2037 | ***.**.*.4
Merged data from memtables and 6 sstables [SharedPool-Worker-2] | 2017-09-05 18:09:44.705001 | **.***.**.237 | 2252 | ***.**.*.4
Read 1 live and 4 tombstone cells [SharedPool-Worker-2] | 2017-09-05 18:09:44.705001 | **.***.**.237 | 2307 | ***.**.*.4
Request complete | 2017-09-05 18:09:44.705458 | **.***.**.237 | 2458 | ***.**.*.4
cqlsh> show version
[cqlsh 5.0.1 | Cassandra 3.7 | CQL spec 3.4.2 | Native protocol v4]
This is a great question, and you (helpfully) provided all of the information we need to answer it!
Your first query is a point lookup (because you specify both clustering keys). The second is a slice.
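For reference, these are the two queries from your traces:

-- fully constrained: a point ("names") read
select c2 from feed where k1 = 'AAA' and k2 = 'BBB' and c1 = '2017-09-05T16:09:00.222Z' and c2 = 'CCC';

-- last clustering key omitted: a slice read
select c2 from feed where k1 = 'AAA' and k2 = 'BBB' and c1 = '2017-09-05T16:09:00.222Z';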
If we look at the traces, the obvious difference between them is:
Skipped 8/9 non-slice-intersecting sstables, included 5 due to tombstones
That is a very good hint that we are taking two different read paths. You can use it to dive into the code, but long story short, the filter you use for the point read means you walk the memtable/sstables in a different order: for point reads we go in timestamp order, while for slices we first try to eliminate non-intersecting sstables.
The comments in the code hint at this. The first:
/**
* Do a read by querying the memtable(s) first, and then each relevant sstables sequentially by order of the sstable
* max timestamp.
*
* This is used for names query in the hope of only having to query the 1 or 2 most recent query and then knowing nothing
* more recent could be in the older sstables (which we can only guarantee if we know exactly which row we queries, and if
* no collection or counters are included).
* This method assumes the filter is a {@code ClusteringIndexNamesFilter}.
*/
The second:
/*
* We have 2 main strategies:
* 1) We query memtables and sstables simulateneously. This is our most generic strategy and the one we use
* unless we have a names filter that we know we can optimize futher.
* 2) If we have a name filter (so we query specific rows), we can make a bet: that all column for all queried row
* will have data in the most recent sstable(s), thus saving us from reading older ones. This does imply we
* have a way to guarantee we have all the data for what is queried, which is only possible for name queries
* and if we have neither collections nor counters (indeed, for a collection, we can't guarantee an older sstable
* won't have some elements that weren't in the most recent sstables, and counters are intrinsically a collection
* of shards so have the same problem).
*/
In your case, the first (point) read would have been faster if the returned row had happened to be in the memtable. Also, since you have 8 sstables, you are probably using STCS or TWCS - if you were using LCS, that partition would likely be compacted down to ~5 sstables, and you would (again) have more predictable read performance.
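If you want to double-check which compaction strategy the table is actually using, the schema tables will tell you (the keyspace name here is a placeholder):

SELECT compaction
FROM system_schema.tables
WHERE keyspace_name = 'my_keyspace' AND table_name = 'feed';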
"I know this query did not used to add entries to the memtable, because I have custom code that runs whenever something is added to the memtable. That code should only run when something is inserted or modified, but it started running when I was merely querying items."
By default, neither read path should add anything to the memtable unless you are read repairing (that is, unless the values mismatch between replicas, or the background read repair chance is triggered). Note that the slice query is more likely to mismatch than the point query because it is scan-based - you would read-repair any/all deletion markers (tombstones) matching c1 = '2017-09-05T16:09:00.222Z'.
Edit: I missed a line in your trace:
Defragmenting requested data
That indicates that you are using STCS and touching too many sstables, so the whole partition is being copied back into the memtable to make future reads faster. This is a little-known optimization in STCS that kicks in when you start touching too many sstables, and you can avoid it by switching to LCS.
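If you do decide to move the table to LCS, the change itself is a one-liner - but compaction strategy is very workload dependent, so test it against your own read/write pattern first:

ALTER TABLE feed
WITH compaction = {'class': 'LeveledCompactionStrategy'};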