Best way to filter a large List in C#?

Robert · 4 · tags: c#, linq, performance, caching

We have an ASP.NET MVC web application that connects to a SQL Server DB via Entity Framework. One of the main tasks of this application is to let users quickly search and filter a huge database table holding archive values.

The table structure is quite simple: Timestamp (DateTime), StationId (int), DatapointId (int), Value (double). The table has somewhere between 10 and 100 million rows. I optimized the table with a covering index etc., but the user experience is still quite laggy when filtering by DatapointId, StationId and time, and skipping to fetch only the portion to be shown on the page.

So I tried a different approach: since our server has a lot of RAM, I figured we could simply load the whole archive table into a List&lt;ArchiveRow&gt; when the web application starts, and then fetch the data directly from that List instead of doing a round-trip to the database. This works quite well; loading the whole archive table (currently about 10 million entries) into the List takes about 9 seconds. The response wrapper looks like this:

public class ArchiveResponse {
    public int Length { get; set; }
    public int numShown { get; set; }
    public int numFound { get; set; }
    public int numTotal { get; set; }
    public List<ArchiveRow> Rows { get; set; }
}

And the ArchiveRow itself is a simple object:

public class ArchiveRow {
    public int s { get; set; }
    public int d { get; set; }
    public DateTime t { get; set; }
    public double v { get; set; }
}

When I now try to get the desired data from the List with a LINQ query, it is already faster than querying the database, but it is still quite slow when filtering by multiple criteria. For example, filtering by one StationId and 12 DatapointIds takes about 5 seconds to retrieve a window of 25 rows. I already switched from filtering with Where to using joins, but I think there is still room for improvement. Is there a better way to implement such a caching mechanism while keeping memory consumption as low as possible? Are there other collection types better suited for this purpose?
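For reference, the gap between filtering with a List-based Contains and a hashed lookup can be seen on toy data. This is only a sketch: the ArchiveRow stand-in below has just the two id fields, and the sample sizes are made up to roughly match the numbers in the question.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class ArchiveRow
{
    public int StationId { get; set; }
    public int DataPointId { get; set; }
}

public static class Program
{
    public static void Main()
    {
        var cache = Enumerable.Range(0, 1_000_000)
            .Select(i => new ArchiveRow { StationId = i % 50, DataPointId = i % 7000 })
            .ToList();

        var wantedDatapoints = Enumerable.Range(0, 12).ToList();

        // Slow: List.Contains is a linear scan, so this is O(rows * filter size)
        int slow = cache.Count(r => wantedDatapoints.Contains(r.DataPointId));

        // Faster: HashSet.Contains is O(1) per row, same result
        var set = new HashSet<int>(wantedDatapoints);
        int fast = cache.Count(r => set.Contains(r.DataPointId));

        Console.WriteLine(slow == fast);
    }
}
```

Both queries select the same rows; only the per-row membership test changes.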

So here is the code that filters and fetches the relevant data from the ArchiveCache list:

// Total number of entries in archive cache
var numTotal = ArchiveCache.Count();

// Initial Linq query
ParallelQuery<ArchiveCacheValue> query = ArchiveCache.AsParallel();

// The request may contain StationIds that the user is interested in,
// so here's the filtering by StationIds with a join:
if (request.StationIds.Count > 0)
{
    query = from a in ArchiveCache.AsParallel()
             join b in request.StationIds.AsParallel()
             on a.StationId equals b
             select a;
}

// The request may contain DatapointIds that the user is interested in,
// so here's the filtering by DatapointIds with a join:
if (request.DatapointIds.Count > 0)
{
    query = from a in query.AsParallel()
             join b in request.DatapointIds.AsParallel()
             on a.DataPointId equals b
             select a;
}

// Number of matching entries after filtering and before windowing
int numFound = query.Count();

// Pagination: Select only the current window that needs to be shown on the page
var result = query.Skip(request.Start == 0 ? 0 : request.Start - 1).Take(request.Length);

// Number of entries on the current page that will be shown
int numShown = result.Count();

// Build a response object, serialize it to Json and return to client
// Note: The projection with the Rows is not a bottleneck, it is only done to
// shorten 'StationId' to 's' etc. At this point there are only 25 to 50 rows,
// so that is no problem and happens in way less than 1 ms
ArchiveResponse myResponse = new ArchiveResponse();
myResponse.Length = request.Length;
myResponse.numShown = numShown;
myResponse.numFound = numFound;
myResponse.numTotal = numTotal;
myResponse.Rows = result.Select(x => new ArchiveRow() { s = x.StationId, d = x.DataPointId, t = x.DateValue, v = x.Value }).ToList();

return JsonSerializer.ToJsonString(myResponse);

More details: the number of stations is usually between 5 and 50, rarely more than 50. The number of datapoints is < 7,000. The web application is set to 64-bit, and <gcAllowVeryLargeObjects enabled="true" /> is set in web.config.
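For completeness, that setting lives under the standard `<runtime>` element; a minimal fragment of the config file mentioned above would look like this:

```xml
<configuration>
  <runtime>
    <!-- Allows arrays larger than 2 GB on 64-bit processes; needed once the
         backing array of the cached List<ArchiveRow> grows past that limit -->
    <gcAllowVeryLargeObjects enabled="true" />
  </runtime>
</configuration>
```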

I am really looking forward to further improvements and suggestions. Maybe there is a completely different approach, based on arrays or similar, that would perform much better without LINQ?
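Since the question asks about an array-based approach without LINQ: one common pattern (sketched here on toy data, with a two-field struct standing in for ArchiveRow) is to keep the rows in a flat array sorted by DataPointId and locate each id's contiguous range with binary search. Structs in a flat array are also friendlier to the GC and the CPU cache than a List of reference objects.

```csharp
using System;
using System.Linq;

public struct Row
{
    public int StationId;
    public int DataPointId;
}

public static class Program
{
    // Lower bound: index of the first row whose DataPointId >= key
    static int LowerBound(Row[] rows, int key)
    {
        int lo = 0, hi = rows.Length;
        while (lo < hi)
        {
            int mid = (lo + hi) / 2;
            if (rows[mid].DataPointId < key) lo = mid + 1;
            else hi = mid;
        }
        return lo;
    }

    public static void Main()
    {
        // Sorted once by DataPointId at load time
        var rows = Enumerable.Range(0, 100_000)
            .Select(i => new Row { StationId = i % 50, DataPointId = i % 7000 })
            .OrderBy(r => r.DataPointId)
            .ToArray();

        // All rows for DataPointId == 4242, found via two binary searches
        int from = LowerBound(rows, 4242);
        int to = LowerBound(rows, 4243);
        Console.WriteLine(to - from);
    }
}
```

Each filter value then costs O(log n) instead of a full scan, at the price of keeping the array sorted.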

Evk · 5

You can adjust your storage to this specific query type. First, create dictionaries from your in-memory archive:

ArchiveCacheByDatapoint = ArchiveCache.GroupBy(c => c.DataPointId)
            .ToDictionary(c => c.Key, c => c.ToList());
ArchiveCacheByStation = ArchiveCache.GroupBy(c => c.StationId)
            .ToDictionary(c => c.Key, c => c.ToList());
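On toy data, the effect of this pre-grouping is that each filter value becomes a single dictionary lookup instead of a scan over the whole cache (ArchiveCacheValue is reduced to its two id fields here; the sizes are made up):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class ArchiveCacheValue
{
    public int StationId { get; set; }
    public int DataPointId { get; set; }
}

public static class Program
{
    public static void Main()
    {
        var archiveCache = Enumerable.Range(0, 100_000)
            .Select(i => new ArchiveCacheValue { StationId = i % 50, DataPointId = i % 7000 })
            .ToList();

        // Built once at startup; every lookup afterwards is O(1) per key
        var byDatapoint = archiveCache.GroupBy(c => c.DataPointId)
            .ToDictionary(g => g.Key, g => g.ToList());

        // All rows for datapoint 42 without scanning the cache
        List<ArchiveCacheValue> rows = byDatapoint[42];
        Console.WriteLine(rows.Count);
    }
}
```

The cost is roughly doubling the number of list references held in memory, since every row appears in both dictionaries.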

Then use those dictionaries in the query:

bool hasStations = request.StationIds.Length > 0;
bool hasDatapoints = request.DatapointIds.Length > 0;            
int numFound = 0;
List<ArchiveCacheValue> result;
if (hasDatapoints && hasStations) {
    // special case - filter by both
    result = new List<ArchiveCacheValue>();
    // store station filter in hash set
    var stationsFilter = new HashSet<int>(request.StationIds);
    // first filter by datapoints, because you have more different datapoints than stations
    foreach (var datapointId in request.DatapointIds.OrderBy(c => c)) {
        // guard against ids that are not present in the cache
        if (!ArchiveCacheByDatapoint.TryGetValue(datapointId, out var datapointRows))
            continue;
        foreach (var item in datapointRows) {
            if (stationsFilter.Contains(item.StationId)) {
                // both datapoint and station match the filter - found item
                numFound++;
                if (numFound >= request.Start && result.Count < request.Length) {
                    // add to result list if it matches the paging criteria
                    result.Add(item);
                }
            }
        }
    }
}
else if (hasDatapoints) {                
    var query = Enumerable.Empty<ArchiveCacheValue>();                
    foreach (var datapoint in request.DatapointIds.OrderBy(c => c))
    {
        // skip ids that are not present in the cache
        if (!ArchiveCacheByDatapoint.TryGetValue(datapoint, out var list))
            continue;
        numFound += list.Count;
        query = query.Concat(list);
    }
    // execute query just once
    result = query.Skip(request.Start).Take(request.Length).ToList();
}
else if (hasStations) {                
    var query = Enumerable.Empty<ArchiveCacheValue>();
    foreach (var station in request.StationIds.OrderBy(c => c))
    {
        // skip ids that are not present in the cache
        if (!ArchiveCacheByStation.TryGetValue(station, out var list))
            continue;
        numFound += list.Count;
        query = query.Concat(list);
    }
    // execute query just once
    result = query.Skip(request.Start).Take(request.Length).ToList();
}
else {
    // no need to do Count()
    numFound = ArchiveCache.Count;
    // no need to Skip\Take here really, ArchiveCache is list\array
    // so you can use indexes which will be faster
    result = ArchiveCache.Skip(request.Start).Take(request.Length).ToList();
}

// Number of entries on the current page that will be shown
int numShown = result.Count;

I measured it, and on my machine it runs in about 1 ms (sometimes up to 10 ms) for all the query types I tried (stations only, datapoints only, both stations and datapoints), with 100 million items.
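The combined branch above can be exercised end-to-end on synthetic data. This is only a sketch: the cache contents, filter ids and paging values below are made up, and the missing-id guard is added with TryGetValue.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class ArchiveCacheValue
{
    public int StationId { get; set; }
    public int DataPointId { get; set; }
}

public static class Program
{
    public static void Main()
    {
        var cache = Enumerable.Range(0, 100_000)
            .Select(i => new ArchiveCacheValue { StationId = i % 50, DataPointId = i % 7000 })
            .ToList();

        var byDatapoint = cache.GroupBy(c => c.DataPointId)
            .ToDictionary(g => g.Key, g => g.ToList());

        // hypothetical request: two matching stations, three datapoints, first page
        var stationsFilter = new HashSet<int> { 10, 20 };
        var datapointIds = new[] { 10, 20, 30 };
        int start = 0, length = 25;

        int numFound = 0;
        var result = new List<ArchiveCacheValue>();
        foreach (var datapointId in datapointIds.OrderBy(c => c))
        {
            // skip ids that are not present in the cache
            if (!byDatapoint.TryGetValue(datapointId, out var list)) continue;
            foreach (var item in list)
            {
                if (!stationsFilter.Contains(item.StationId)) continue;
                numFound++;
                // collect only the rows of the requested page
                if (numFound > start && result.Count < length)
                    result.Add(item);
            }
        }

        Console.WriteLine($"{numFound} {result.Count}");
    }
}
```

The total match count and the page size come out of a single pass, without materializing the full filtered set.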