fja*_*sze (score 22) · tags: c++, algorithm, performance, skip-lists, data-structures
I'm trying to implement a skip list that performs as well as a BST while using as little extra memory as possible. Right now, even ignoring any memory constraints, the performance of my SkipList implementation is far behind even a very naive balanced-BST implementation (a hand-crafted BST, so to speak :)).
As a reference I used William Pugh's original paper [PUG89] and the implementation I found in Sedgewick's Algorithms in C (section 13.5). My code is a recursive implementation; the following is the insert operation:
sl_node* create_node()
{
    // dist2/en: uniform real distribution and its engine, p: promotion
    // probability, max_level: height cap (all members of the enclosing class)
    short lvl {1};
    while((dist2(en) < p) && (lvl < max_level))
        ++lvl;
    return new sl_node(lvl);
}

void insert_impl(sl_node* cur_node,
                 sl_node* new_node,
                 short lvl)
{
    if(cur_node->next_node[lvl] == nullptr || cur_node->next_node[lvl]->value > new_node->value){
        if(lvl < new_node->lvl){
            new_node->next_node[lvl] = cur_node->next_node[lvl];
            cur_node->next_node[lvl] = new_node;
        }
        if(lvl == 0) return;
        insert_impl(cur_node, new_node, lvl - 1);
        return;
    }
    insert_impl(cur_node->next_node[lvl], new_node, lvl);
}

sl_node* insert(long p_val)
{
    sl_node* new_node = create_node();
    new_node->value = p_val;
    insert_impl(head, new_node, max_level - 1);
    return new_node;
}
Here is the code for the find operation:
sl_node* find_impl(sl_node* cur_node,
                   long p_val,
                   int lvl)
{
    if(cur_node == nullptr) return nullptr;
    if(cur_node->value == p_val) return cur_node;
    if(cur_node->next_node[lvl] == nullptr || cur_node->next_node[lvl]->value > p_val){
        if(lvl == 0) return nullptr;
        return find_impl(cur_node, p_val, lvl - 1);
    }
    return find_impl(cur_node->next_node[lvl], p_val, lvl);
}

sl_node* find(long p_val)
{
    return find_impl(head, p_val, max_level - 1);
}
sl_node, the skip-list node, looks like this:
struct sl_node
{
    long value;
    short lvl;
    sl_node** next_node;

    sl_node(int l) : lvl(l)
    {
        next_node = new sl_node*[l];
        for(short i{0}; i < l; i++)
            next_node[i] = nullptr;
    }

    ~sl_node()
    {
        delete[] next_node;
    }
};
As you can see, there is nothing special or advanced about the implementation compared with the one from the book. I won't share the balanced BST code because I don't think it's needed here; it is a very basic BST with a rebalancing function that is triggered during insertion whenever the height of a new node exceeds 16*lg(n), where n is the number of nodes.
In other words, the tree is rebalanced only when its maximum height is 16 times larger than the theoretical optimum, and as I said, this straightforward home-made BST performs much better than the home-made skip list.
But first, let's look at some data. Using p = .5 and n = 262144, the levels of the nodes in the SkipList have the following distribution:
1:141439, 53.9547%
2:65153, 24.8539%
3:30119, 11.4895%
4:13703, 5.22728%
5:6363, 2.42729%
6:2895, 1.10435%
7:1374, 0.524139%
8:581, 0.221634%
9:283, 0.107956%
10:117, 0.044632%
11:64, 0.0244141%
12:31, 0.0118256%
13:11, 0.00419617%
14:5, 0.00190735%
15:1, 0.00038147%
16:5, 0.00190735%
This matches the theory from the paper almost perfectly (oh, what a relief!): 50% at level 1, 25% at level 2, and so on. The input data come from my best available pseudo-random number generator, std::random_device, feeding a std::default_random_engine with a uniform int distribution. The input looks random enough to me :)!
The time required to search for all 262144 elements in random order in the SkipList is 315 ms on my machine, while the same search operation on the naive BST takes 134 ms, so the BST is almost twice as fast as the SkipList. That is not quite what I expected from "skip list algorithms have the same asymptotic expected time bounds as balanced trees and are simpler, faster and use less space" [PUG89].
The time required to insert the nodes is 387 ms for the SkipList and 143 ms for the BST; again, the naive BST performs better.
Things get more interesting if, instead of a random sequence of input numbers, I use a sorted sequence. Here my poor home-made BST becomes slow: inserting 262144 sorted ints takes 2866 ms, while the SkipList needs only 168 ms.
But when it comes to searching, the BST is still faster! For the sorted input we have 234 ms versus 77 ms; this BST is 3x faster.
Using different values of the p factor I get slightly different results (the charts are omitted here).
Last but not least, the memory-usage graph, as you might expect, confirms that increasing the number of levels per node significantly affects the memory footprint. Memory usage is computed as the total space required to store the additional pointers of all nodes.
After all this, could any of you comment on how to implement a SkipList that performs as well as a BST, not counting the extra memory overhead?
I know about the DrDobbs article (LINK) on skip lists; I went through it, and the code for its search and insert operations matches Pugh's original implementation [PUG89] exactly, so it should be as good as mine. The article does not provide any performance analysis anyway, and I have not found any other source. Can you help me?
Any comment is really appreciated!
Thanks! :)
Dra*_*rgy 24
Times have changed a bit since William Pugh wrote his original paper. We see no mention in his paper about the memory hierarchy of the CPU and operating system which has become such a prevalent focus today (now often equally as important as algorithmic complexity).
His input case for benchmarking had a measly 2^16 elements, and hardware back then typically had, at most, 32-bit extended memory addressing available. This made the size of a pointer half the size or smaller than what we're used to today on 64-bit machines. Meanwhile a string field, e.g., could be just as large, making the ratio between the elements stored in the skip list and the pointers required by a skip node potentially a lot smaller, especially given that we often need a number of pointers per skip node.
C compilers were not as aggressive at optimization back then in areas like register allocation and instruction selection. Even ordinary hand-written assembly could provide a significant performance advantage. Compiler hints like register and inline actually did something substantial in those days. While this may seem moot, since a balanced BST and a skip list implementation would be on equal footing here, even the optimization of basic loops was a more manual process. When optimization is a largely manual process, whatever is easier to implement is often easier to optimize, and skip lists are commonly considered a lot easier to implement than a balanced tree.
So all of these factors probably played a part in Pugh's conclusions at the time. Times have changed: hardware has changed, operating systems have changed, compilers have changed, more research has been done on these subjects, and so on.
That aside, let's have some fun and implement a basic skip list. I ended up adapting an existing implementation out of laziness. It's a run-of-the-mill implementation, hardly different from the plethora of easily accessible exemplary skip list implementations available today.
We'll compare the performance of our implementation against std::set, which is almost always implemented as a red-black tree*.
*Some might wonder why I use 0 instead of nullptr and things of that sort. It's a habit. At my workplace, we still have to write open libraries that target a wide range of compilers including those that only support C++03, so I'm still used to writing lower/mid-level implementation code that way, and sometimes even in C, so please forgive the old style in which I wrote this code.
#include <iostream>
#include <algorithm>
#include <cstdlib>
#include <ctime>
#include <cmath>
#include <vector>
#include <cassert>
#include <cstring>
#include <set>

using namespace std;

static const int max_level = 32;
static const float probability = 0.5;

static double sys_time()
{
    return static_cast<double>(clock()) / CLOCKS_PER_SEC;
}

static int random_level()
{
    int lvl = 1;
    while ((static_cast<float>(rand()) / RAND_MAX) < probability && lvl < max_level)
        ++lvl;
    return lvl;
}

template <class T>
class SkipSet
{
public:
    SkipSet(): head(0)
    {
        head = create_node(max_level, T());
        level = 0;
    }

    ~SkipSet()
    {
        while (head)
        {
            Node* to_destroy = head;
            head = head->next[0];
            destroy_node(to_destroy);
        }
    }

    bool contains(const T& value) const
    {
        const Node* node = head;
        for (int i=level; i >= 0; --i)
        {
            while (node->next[i] && node->next[i]->value < value)
                node = node->next[i];
        }
        node = node->next[0];
        return node && node->value == value;
    }

    void insert(const T& value)
    {
        Node* node = head;
        Node* update[max_level + 1];
        memset(update, 0, sizeof(Node*)*(max_level + 1));
        for (int i = level; i >= 0; i--)
        {
            while (node->next[i] && node->next[i]->value < value)
                node = node->next[i];
            update[i] = node;
        }
        node = node->next[0];

        if (!node || node->value != value)
        {
            int lvl = random_level();
            assert(lvl >= 0);
            if (lvl > level)
            {
                for (int i = level + 1; i <= lvl; i++)
                    update[i] = head;
                level = lvl;
            }
            node = create_node(lvl, value);
            for (int i = 0; i <= lvl; i++)
            {
                node->next[i] = update[i]->next[i];
                update[i]->next[i] = node;
            }
        }
    }

    bool erase(const T& value)
    {
        Node* node = head;
        Node* update[max_level + 1];
        memset(update, 0, sizeof(Node*)*(max_level + 1));
        for (int i = level; i >= 0; i--)
        {
            while (node->next[i] && node->next[i]->value < value)
                node = node->next[i];
            update[i] = node;
        }
        node = node->next[0];

        // Guard against running off the end of the list.
        if (node && node->value == value)
        {
            for (int i = 0; i <= level; i++)
            {
                if (update[i]->next[i] != node)
                    break;
                update[i]->next[i] = node->next[i];
            }
            destroy_node(node);
            while (level > 0 && !head->next[level])
                --level;
            return true;
        }
        return false;
    }

private:
    struct Node
    {
        T value;
        struct Node** next;
    };

    Node* create_node(int level, const T& new_value)
    {
        void* node_mem = malloc(sizeof(Node));
        Node* new_node = static_cast<Node*>(node_mem);
        new (&new_node->value) T(new_value);
        void* next_mem = calloc(level+1, sizeof(Node*));
        new_node->next = static_cast<Node**>(next_mem);
        return new_node;
    }

    void destroy_node(Node* node)
    {
        node->value.~T();
        free(node->next);
        free(node);
    }

    Node* head;
    int level;
};

template <class T>
bool contains(const std::set<T>& cont, const T& val)
{
    return cont.find(val) != cont.end();
}

template <class T>
bool contains(const SkipSet<T>& cont, const T& val)
{
    return cont.contains(val);
}

template <class Set, class T>
void benchmark(int num, const T* elements, const T* search_elements)
{
    const double start_insert = sys_time();
    Set element_set;
    for (int j=0; j < num; ++j)
        element_set.insert(elements[j]);
    cout << "-- Inserted " << num << " elements in " << (sys_time() - start_insert) << " secs" << endl;

    const double start_search = sys_time();
    int num_found = 0;
    for (int j=0; j < num; ++j)
    {
        if (contains(element_set, search_elements[j]))
            ++num_found;
    }
    cout << "-- Found " << num_found << " elements in " << (sys_time() - start_search) << " secs" << endl;

    const double start_erase = sys_time();
    int num_erased = 0;
    for (int j=0; j < num; ++j)
    {
        if (element_set.erase(search_elements[j]))
            ++num_erased;
    }
    cout << "-- Erased " << num_found << " elements in " << (sys_time() - start_erase) << " secs" << endl;
}

int main()
{
    const int num_elements = 200000;
    vector<int> elements(num_elements);
    for (int j=0; j < num_elements; ++j)
        elements[j] = j;
    random_shuffle(elements.begin(), elements.end());

    vector<int> search_elements = elements;
    random_shuffle(search_elements.begin(), search_elements.end());

    typedef std::set<int> Set1;
    typedef SkipSet<int> Set2;

    for (int j=0; j < 3; ++j)
    {
        cout << "std::set" << endl;
        benchmark<Set1>(num_elements, &elements[0], &search_elements[0]);
        cout << endl;

        cout << "SkipSet" << endl;
        benchmark<Set2>(num_elements, &elements[0], &search_elements[0]);
        cout << endl;
    }
}
On GCC 5.2, -O2, I get this:
std::set
-- Inserted 200000 elements in 0.104869 secs
-- Found 200000 elements in 0.078351 secs
-- Erased 200000 elements in 0.098208 secs
SkipSet
-- Inserted 200000 elements in 0.188765 secs
-- Found 200000 elements in 0.160895 secs
-- Erased 200000 elements in 0.162444 secs
... which is pretty awful. We're around twice as slow all across the board.
Yet there is a glaring optimization we can make. If we look at Node, its current fields look like this:
struct Node
{
    T value;
    struct Node** next;
};
This implies that the memory for the Node fields and its list of next pointers are two separate blocks, possibly with a very distant stride between them like so:
[Node fields]-------------------->[next0,next1,...,null]
This does poorly for locality of reference. If we want to improve things here, we should merge these memory blocks into a single contiguous structure, like so:
[Node fields,next0,next1,...,null]
We can achieve this through the variable-length struct idiom common in C. It's a little bit awkward to implement in C++ which doesn't support it so directly, but we can emulate the effect like so:
struct Node
{
    T value;
    struct Node* next[1];
};

Node* create_node(int level, const T& new_value)
{
    void* node_mem = malloc(sizeof(Node) + level * sizeof(Node*));
    Node* new_node = static_cast<Node*>(node_mem);
    new (&new_node->value) T(new_value);
    for (int j=0; j < level+1; ++j)
        new_node->next[j] = 0;
    return new_node;
}

void destroy_node(Node* node)
{
    node->value.~T();
    free(node);
}
With this modest tweak, we now have these times:
SkipSet (Before)
-- Inserted 200000 elements in 0.188765 secs
-- Found 200000 elements in 0.160895 secs
-- Erased 200000 elements in 0.162444 secs
SkipSet (After)
-- Inserted 200000 elements in 0.132322 secs
-- Found 200000 elements in 0.127989 secs
-- Erased 200000 elements in 0.130889 secs
... which gets us considerably closer to rivaling the performance of std::set.
A truly efficient skip list implementation will generally want a very fast RNG. Nevertheless, I found during a quick profiling session that only a very teeny portion of time is spent generating a random level/height, hardly enough to consider it much of a hotspot. It would also only impact the insertion times unless it provided a more uniform distribution, so I've decided to skip this optimization.
At this point, I'd say we have a pretty reasonable overview of what we can expect with a skip list implementation vs. a BST:
Insertion
-- std::set: 0.104869 secs
-- SkipList: 0.132322 secs
Search:
-- std::set: 0.078351 secs
-- SkipList: 0.127989 secs
Removal:
-- std::set: 0.098208 secs
-- SkipList: 0.130889 secs
However, if we want to soldier on a little further, we can utilize a fixed allocator. At this point, we're cheating a little bit as std::set is designed to work with any general-purpose allocator which conforms to the interface requirements of a standard allocator. But let's have a look at using a fixed allocator:
#include <iostream>
#include <iomanip>
#include <algorithm>
#include <cstdlib>
#include <ctime>
#include <cmath>
#include <vector>
#include <cassert>
#include <set>

using namespace std;

static const int max_level = 32;

class FixedAlloc
{
public:
    FixedAlloc(): root_block(0), free_element(0), type_size(0), block_size(0), block_num(0)
    {
    }

    FixedAlloc(int itype_size, int iblock_size): root_block(0), free_element(0), type_size(0), block_size(0), block_num(0)
    {
        init(itype_size, iblock_size);
    }

    ~FixedAlloc()
    {
        purge();
    }

    void init(int new_type_size, int new_block_size)
    {
        purge();
        // Make sure a block can hold at least one element.
        block_size = max(new_block_size, new_type_size);
        type_size = max(new_type_size, static_cast<int>(sizeof(FreeElement)));
        block_num = block_size / type_size;
    }

    void purge()
    {
        while (root_block)
        {
            Block* block = root_block;
            root_block = root_block->next;
            free(block);
        }
        free_element = 0;
    }

    void* allocate()
    {
        assert(type_size > 0);
        if (free_element)
        {
            void* mem = free_element;
            free_element = free_element->next_element;
            return mem;
        }

        // Create new block.
        void* new_block_mem = malloc(sizeof(Block) - 1 + type_size * block_num);
        Block* new_block = static_cast<Block*>(new_block_mem);
        new_block->next = root_block;
        root_block = new_block;

        // Push all but one of the new block's elements to the free pool.
        char* mem = new_block->mem;
        for (int j=1; j < block_num; ++j)
        {
            FreeElement* element = reinterpret_cast<FreeElement*>(mem + j * type_size);
            element->next_element = free_element;
            free_element = element;
        }
        return mem;
    }

    void deallocate(void* mem)
    {
        FreeElement* element = static_cast<FreeElement*>(mem);
        element->next_element = free_element;
        free_element = element;
    }

    void swap(FixedAlloc& other)
    {
        std::swap(free_element, other.free_element);
        std::swap(root_block, other.root_block);
        std::swap(type_size, other.type_size);
        std::swap(block_size, other.block_size);
        std::swap(block_num, other.block_num);
    }

private:
    struct Block
    {
        Block* next;
        char mem[1];
    };

    struct FreeElement
    {
        struct FreeElement* next_element;
    };

    // Disable copying.
    FixedAlloc(const FixedAlloc&);
    FixedAlloc& operator=(const FixedAlloc&);

    struct Block* root_block;
    struct FreeElement* free_element;
    int type_size;
    int block_size;
    int block_num;
};

static double sys_time()
{
    return static_cast<double>(clock()) / CLOCKS_PER_SEC;
}

static int random_level()
{
    int lvl = 1;
    while (rand()%2 == 0 && lvl < max_level)
        ++lvl;
    return lvl;
}

template <class T>
class SkipSet
{
public:
    SkipSet(): head(0)
    {
        for (int j=0; j < max_level; ++j)
            allocs[j].init(sizeof(Node) + (j+1)*sizeof(Node*), 4096);
        head = create_node(max_level, T());
        level = 0;
    }

    ~SkipSet()
    {
        while (head)
        {
            Node* to_destroy = head;
            head = head->next[0];
            destroy_node(to_destroy);
        }
    }

    bool contains(const T& value) const
    {
        const Node* node = head;
        for (int i=level; i >= 0; --i)
        {
            while (node->next[i] && node->next[i]->value < value)
                node = node->next[i];
        }
        node = node->next[0];
        return node && node->value == value;
    }

    void insert(const T& value)
    {
        Node* node = head;
        Node* update[max_level + 1] = {0};
        for (int i=level; i >= 0; --i)
        {
            while (node->next[i] && node->next[i]->value < value)
                node = node->next[i];
            update[i] = node;
        }
        node = node->next[0];

        if (!node || node->value != value)
        {
            const int lvl = random_level();
            assert(lvl >= 0);
            if (lvl > level)
            {
                for (int i = level + 1; i <= lvl; ++i)
                    update[i] = head;
                level = lvl;
            }
            node = create_node(lvl, value);
            for (int i = 0; i <= lvl; ++i)
            {
                node->next[i] = update[i]->next[i];
                update[i]->next[i] = node;
            }
        }
    }

    bool erase(const T& value)
    {
        Node* node = head;
        Node* update[max_level + 1] = {0};
        for (int i=level; i >= 0; --i)
        {
            while (node->next[i] && node->next[i]->value < value)
                node = node->next[i];
            update[i] = node;
        }
        node = node->next[0];

        // Guard against running off the end of the list.
        if (node && node->value == value)
        {
            for (int i=0; i <= level; ++i)
            {
                if (update[i]->next[i] != node)
                    break;
                update[i]->next[i] = node->next[i];
            }
            destroy_node(node);
            while (level > 0 && !head->next[level])
                --level;
            return true;
        }
        return false;
    }

    void swap(SkipSet<T>& other)
    {
        for (int j=0; j < max_level; ++j)
            allocs[j].swap(other.allocs[j]);
        std::swap(head, other.head);
        std::swap(level, other.level);
    }

private:
    struct Node
    {
        T value;
        int num;
        struct Node* next[1];
    };

    Node* create_node(int level, const T& new_value)
    {
        void* node_mem = allocs[level-1].allocate();
        Node* new_node = static_cast<Node*>(node_mem);
        new (&new_node->value) T(new_value);
        new_node->num = level;
        for (int j=0; j < level+1; ++j)
            new_node->next[j] = 0;
        return new_node;
    }

    void destroy_node(Node* node)
    {
        node->value.~T();
        allocs[node->num-1].deallocate(node);
    }

    FixedAlloc allocs[max_level];
    Node* head;
    int level;
};

template <class T>
bool contains(const std::set<T>& cont, const T& val)
{
    return cont.find(val) != cont.end();
}

template <class T>
bool contains(const SkipSet<T>& cont, const T& val)
{
    return cont.contains(val);
}

template <class Set, class T>
void benchmark(int num, const T* elements, const T* search_elements)
{
    const double start_insert = sys_time();
    Set element_set;
    for (int j=0; j < num; ++j)
        element_set.insert(elements[j]);
    cout << "-- Inserted " << num << " elements in " << (sys_time() - start_insert) << " secs" << endl;

    const double start_search = sys_time();
    int num_found = 0;
    for (int j=0; j < num; ++j)
    {
        if (contains(element_set, search_elements[j]))
            ++num_found;
    }
    cout << "-- Found " << num_found << " elements in " << (sys_time() - start_search) << " secs" << endl;

    const double start_erase = sys_time();
    int num_erased = 0;
    for (int j=0; j < num; ++j)
    {
        if (element_set.erase(search_elements[j]))
            ++num_erased;
    }
    cout << "-- Erased " << num_found << " elements in " << (sys_time() - start_erase) << " secs" << endl;
}

int main()
{
    const int num_elements = 200000;
    vector<int> elements(num_elements);
    for (int j=0; j < num_elements; ++j)
        elements[j] = j;
    random_shuffle(elements.begin(), elements.end());

    vector<int> search_elements = elements;
    random_shuffle(search_elements.begin(), search_elements.end());

    typedef std::set<int> Set1;
    typedef SkipSet<int> Set2;

    cout << fixed << setprecision(3);
    for (int j=0; j < 2; ++j)
    {
        cout << "std::set" << endl;
        benchmark<Set1>(num_elements, &elements[0], &search_elements[0]);
        cout << endl;

        cout << "SkipSet" << endl;
        benchmark<Set2>(num_elements, &elements[0], &search_elements[0]);
        cout << endl;
    }
}
*I also made a minor tweak to random_level to make it simply assume a probability of 50% after discovering that this seems to work quite well.
By using a fixed allocator, we can quickly allocate and deallocate elements using a very simple constant-time strategy, and more importantly, allocate nodes in a way such that nodes with the same height (the most frequently accessed together) are allocated more contiguously with respect to each other (though not in an ideal sequential order). This improves the times to:
Insertion
-- std::set: 0.104869 secs
-- SkipList: 0.103632 secs
Search:
-- std::set: 0.078351 secs
-- SkipList: 0.089910 secs
Removal:
-- std::set: 0.098208 secs
-- SkipList: 0.089224 secs
... which isn't bad for about 40 minutes of work against std::set which has been poked and prodded and tuned by experts all over the industry. We also have faster removals than std::set (insertion times fluctuate a bit on my machine, I'd say they are roughly on par).
Of course we cheated to apply this last step. Using a custom allocator tilts things in our favor, and also changes the characteristics of the container in a way such that its memory cannot be freed until it is cleared, destroyed, or compacted. It can mark the memory as unused and reclaim it upon subsequent insertions, but it cannot make the memory it uses available for those outside the skip list to use. I also didn't bother to pay attention to proper alignment for all possible types of T which would be necessary to truly generalize it.
Let's try using this against sorted input. To do so, simply comment out the two random_shuffle statements. On my end, I now get these times:
std::set
-- Inserted 200000 elements in 0.044 secs
-- Found 200000 elements in 0.023 secs
-- Erased 200000 elements in 0.019 secs
SkipSet
-- Inserted 200000 elements in 0.027 secs
-- Found 200000 elements in 0.023 secs
-- Erased 200000 elements in 0.016 secs
... and now our SkipSet outperforms std::set in all areas, though just for this one kind of exceptional case.
So I don't know exactly what this says about skip lists. With hardly any time and effort, we got pretty close to rivaling std::set*. Yet we didn't beat it, and we had to cheat with a memory allocator to get really close.
*Note that mileage may vary across machines, vendor implementations of std::set, etc.
Times have changed quite a bit since the paper Pugh wrote in 1989.
Today the benefits of locality of reference dominate to the point where even a linearithmic-complexity algorithm can outperform a linear one, provided the former is considerably more cache- or page-friendly. Paying close attention to the way the system grabs chunks of memory from the upper levels of the memory hierarchy (e.g., secondary storage), with slower but bigger memory, down to the little L1 cache line and the teeny register, is a bigger deal than ever before, and no longer "micro" if you ask me, when the benefits can rival algorithmic improvements.
The skip list is potentially crippled here given the considerably larger size of nodes, and just as importantly, the variable size of nodes (which makes them difficult to allocate very efficiently).
One thing we didn't look at is the localized nature in which a skip list updates on insertion. This tends to impact much fewer areas than a balancing tree requires to rebalance by rotating parent nodes. As a result, a skip list can offer a potentially more efficient implementation of a concurrent set or map.
Another promising characteristic of a skip list is that it is simply a linked list at the lowest level. As a result, it offers very simple linear-time sequential traversal without having to descend the branches of a tree, so it is potentially very good if we want to do a lot of linear-time iterations through all of the elements contained.
But always remember:
A technique is only as good as it can be applied in the hands of the implementor.