Mas*_*sse 15 parallel-processing haskell
I'm trying to wrap my head around parallel strategies. I think I understand what each combinator does, but every time I try to use more than one core, the program slows down considerably.

For example, a while ago I tried to compute histograms (and, from them, the unique words) of ~700 documents. I figured that file-level granularity would be fine. With -N4 I get a work balance of 1.70. However, with -N1 it runs in half the time it takes with -N4. I'm not sure what the problem really is, but I'd like to know how to decide where/when/how to parallelize, and gain some understanding of it. How would this be parallelized so that the speed increases with the cores instead of decreasing?
import Data.Map (Map)
import qualified Data.Map as M
import System.Directory
import Control.Applicative
import Data.Vector (Vector)
import qualified Data.Vector as V
import qualified Data.Text as T
import qualified Data.Text.IO as TI
import Data.Text (Text)
import System.FilePath ((</>))
import Control.Parallel.Strategies
import qualified Data.Set as S
import Data.Set (Set)
import GHC.Conc (pseq, numCapabilities)
import Data.List (foldl')
mapReduce stratm m stratr r xs =
    let mapped  = parMap stratm m xs
        reduced = r mapped `using` stratr
    in mapped `pseq` reduced
type Histogram = Map Text Int
rootDir = "/home/masse/Documents/text_conversion/"
finnishStop = ["minä", "sinä", "hän", "kuitenkin", "jälkeen", "mukaanlukien", "koska", "mutta", "jos", "kuitenkin", "kun", "kunnes", "sanoo", "sanoi", "sanoa", "miksi", "vielä", "sinun"]
englishStop = ["a","able","about","across","after","all","almost","also","am","among","an","and","any","are","as","at","be","because","been","but","by","can","cannot","could","dear","did","do","does","either","else","ever","every","for","from","get","got","had","has","have","he","her","hers","him","his","how","however","i","if","in","into","is","it","its","just","least","let","like","likely","may","me","might","most","must","my","neither","no","nor","not","of","off","often","on","only","or","other","our","own","rather","said","say","says","she","should","since","so","some","than","that","the","their","them","then","there","these","they","this","tis","to","too","twas","us","wants","was","we","were","what","when","where","which","while","who","whom","why","will","with","would","yet","you","your"]
isStopWord :: Text -> Bool
isStopWord x = x `elem` (finnishStop ++ englishStop)
textFiles :: IO [FilePath]
textFiles = map (rootDir </>) . filter (not . meta) <$> getDirectoryContents rootDir
  where meta "."  = True
        meta ".." = True
        meta _    = False
histogram :: Text -> Histogram
histogram = foldr (\k -> M.insertWith' (+) k 1) M.empty . filter (not . isStopWord) . T.words
wordList = do
    files <- mapM TI.readFile =<< textFiles
    return $ mapReduce rseq histogram rseq reduce files
  where
    reduce = M.unions
main = do
    list <- wordList
    print $ M.size list
As for the text files, I'm using PDFs converted to text files, so I can't provide them, but for this purpose almost any book(s) from Project Gutenberg should do.
Edit: added the script's imports
In practice, getting the parallel combinators to scale well can be difficult. Others have mentioned making your code more strict to ensure you are actually doing the work in parallel, which is definitely important.
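That strictness point can be sketched as follows. `rseq` only evaluates a spark's result to weak head normal form, which for a `Map` is just the outermost constructor; `rdeepseq` forces the whole structure, so each spark does the real work. A minimal illustration of my own (not part of the original answer), using `String` keys and a hypothetical `parHistograms` helper built on `parMap` from the `parallel` package:

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)
import qualified Data.Map.Strict as M

-- rdeepseq forces each result map completely inside its spark, so the
-- histogram work actually happens in parallel rather than being deferred
-- to whichever thread consumes the maps later.
parHistograms :: [[String]] -> [M.Map String Int]
parHistograms = parMap rdeepseq (\ws -> M.fromListWith (+) [(w, 1) | w <- ws])
```

In terms of the question's `mapReduce`, this corresponds to passing `rdeepseq` rather than `rseq` as the mapping strategy.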
Two things that really hurt performance are lots of memory traversal and garbage collection. Even if you are not producing a lot of garbage, lots of memory traversal puts more pressure on the CPU cache, and eventually your memory bus becomes the bottleneck. Your isStopWord function performs a lot of string comparisons, and has to traverse a rather long linked list to do so. You can save a lot of work by using the built-in Set type or, even better, the HashSet type from the unordered-containers package (since repeated string comparisons can be expensive, especially if they share common prefixes).
import Data.HashSet (HashSet)
import qualified Data.HashSet as S

...

finnishStop :: [Text]
finnishStop = ["minä", "sinä", "hän", "kuitenkin", "jälkeen", "mukaanlukien", "koska", "mutta", "jos", "kuitenkin", "kun", "kunnes", "sanoo", "sanoi", "sanoa", "miksi", "vielä", "sinun"]
englishStop :: [Text]
englishStop = ["a","able","about","across","after","all","almost","also","am","among","an","and","any","are","as","at","be","because","been","but","by","can","cannot","could","dear","did","do","does","either","else","ever","every","for","from","get","got","had","has","have","he","her","hers","him","his","how","however","i","if","in","into","is","it","its","just","least","let","like","likely","may","me","might","most","must","my","neither","no","nor","not","of","off","often","on","only","or","other","our","own","rather","said","say","says","she","should","since","so","some","than","that","the","their","them","then","there","these","they","this","tis","to","too","twas","us","wants","was","we","were","what","when","where","which","while","who","whom","why","will","with","would","yet","you","your"]

stopWord :: HashSet Text
stopWord = S.fromList (finnishStop ++ englishStop)

isStopWord :: Text -> Bool
isStopWord x = x `S.member` stopWord

Replacing your isStopWord function with this version performs much better and scales much better (though definitely not 1-1). You could also consider using HashMap (from the same package) instead of Map for the same reason, but doing so did not produce a noticeable change.
Another option is to increase the default heap size, which takes some pressure off the GC and gives it more room to move things around. Giving the compiled code a default heap size of 1GB (the -H1G flag), I get a GC balance of about 50% on 4 cores, whereas without the flag I only get ~25% (and it also runs ~30% faster).
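For reference, here is how those flags combine on the command line (a sketch; the source file and binary names are placeholders):

```shell
# Build with optimizations and the threaded runtime;
# -rtsopts allows RTS flags to be passed at run time
ghc -O2 -threaded -rtsopts histogram.hs -o histogram

# Run on 4 capabilities with a 1GB default heap;
# -s prints GC and productivity statistics on exit
./histogram +RTS -N4 -H1G -s
```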
With these two changes, the average runtime on four cores (on my machine) drops from ~10.5 s to ~3.5 s. Arguably there is still room for improvement according to the GC statistics (it still spends only 58% of the time doing productive work), but doing significantly better might require a much more drastic change to your algorithm.
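One further small change worth trying (my own addition, not something measured above): histogram builds its map with foldr and the now-deprecated M.insertWith'. A strict left fold over Data.Map.Strict does the same job without accumulating a spine of pending inserts. A sketch using String in place of Text to keep it self-contained:

```haskell
import Data.List (foldl')
import qualified Data.Map.Strict as M

-- Strict left fold: the map is updated as each word arrives, so no thunk
-- spine accumulates; Data.Map.Strict.insertWith also forces the values,
-- replacing the deprecated insertWith' from lazy Data.Map.
histogram' :: [String] -> M.Map String Int
histogram' = foldl' (\m w -> M.insertWith (+) w 1 m) M.empty
```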