I have an application written in Go that does message processing. It needs to receive messages from the network (UDP) at a rate of 20K/sec (possibly more), and each message can be up to the maximum UDP packet length (64KB minus header size). The program has to decode the incoming packet, encode it into another format, and send it out to another network.

Right now it runs fine on a 24-core + 64GB RAM machine, but occasionally it loses some packets. The programming pattern already follows a pipeline built from multiple goroutines/channels, and the process only takes about 10% of the whole machine's CPU load, so it should be able to use more CPU% or RAM to handle all 20K/s messages without losing any. Then I started profiling; in the CPU profile, runtime.mallocgc showed up near the top, i.e. the garbage collector was doing a lot of work. I suspect the GC could be the culprit: it stalls for a few milliseconds (or some microseconds) and packets get lost. Some best practices suggest switching to sync.Pool, but after I switched to the pool there seems to be even more CPU contention, and packets are lost more often.
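For reference, the receive/process pipeline roughly follows this shape (a minimal sketch only; the listen port, worker count, channel size and the decode/forward step are placeholders, not the actual code):

package main

import "net"

const maxPacket = 64 * 1024 // generous upper bound for a UDP payload

func main() {
	conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: 9000}) // port is a placeholder
	if err != nil {
		panic(err)
	}
	msgs := make(chan []byte, 1024) // one reader goroutine feeds N workers
	for i := 0; i < 8; i++ {        // worker count is an assumption
		go func() {
			for m := range msgs {
				_ = m // decode, re-encode and send to the other network here
			}
		}()
	}
	buf := make([]byte, maxPacket)
	for {
		n, _, err := conn.ReadFromUDP(buf)
		if err != nil {
			continue
		}
		m := make([]byte, n) // per-packet allocation; this is what shows up in runtime.mallocgc
		copy(m, buf[:n])
		msgs <- m
	}
}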
(pprof) top20 -cum (sync|runtime)
245.99s of 458.81s total (53.61%)
Dropped 487 nodes (cum <= 22.94s)
Showing top 20 nodes out of 22 (cum >= 30.46s)
flat flat% sum% cum cum%
0 0% 0% 440.88s 96.09% runtime.goexit
1.91s 0.42% 1.75% 244.87s 53.37% sync.(*Pool).Get
64.42s 14.04% 15.79% 221.57s 48.29% sync.(*Pool).getSlow
94.29s 20.55% 36.56% 125.53s 27.36% sync.(*Mutex).Lock
1.62s 0.35% 36.91% 72.85s 15.88% runtime.systemstack
22.43s 4.89% 41.80% 60.81s 13.25% runtime.mallocgc
22.88s 4.99% 46.79% 51.75s 11.28% runtime.scanobject
1.78s 0.39% 47.17% 49.15s 10.71% runtime.newobject
26.72s 5.82% 53.00% 39.09s 8.52% sync.(*Mutex).Unlock
0.76s 0.17% 53.16% 33.74s 7.35% runtime.gcDrain
0 0% 53.16% 33.70s 7.35% runtime.gcBgMarkWorker
0 0% 53.16% 33.69s 7.34% runtime.gcBgMarkWorker.func2
The pool usage is standard:
// create this one globally at program init
var rfpool = &sync.Pool{New: func() interface{} { return new(aPrivateStruct) }}

// get
rf := rfpool.Get().(*aPrivateStruct)

// put after done processing this message
rfpool.Put(rf)
I am not sure whether I am doing something wrong here. I would also like to know what other ways there are to tune the GC so that it uses less CPU%. The Go version is 1.8.
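One knob covered in the runtime-environment-variables article linked at the bottom is GOGC, which can also be changed at runtime via runtime/debug.SetGCPercent. Below is only a sketch of raising it to trade some of the 64GB of RAM for fewer GC cycles; the value 400 is an assumption to experiment with, not a verified fix:

import "runtime/debug"

func init() {
	// Default is GOGC=100 (collect when the heap doubles). Raising it lets the
	// heap grow further between collections, so the GC runs less often.
	// Equivalent to starting the process with GOGC=400.
	debug.SetGCPercent(400)
}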
The pprof listing of sync.(*Pool).getSlow (source: pool.go on golang.org) shows that a lot of lock contention is happening there:
(pprof) list sync.*.getSlow
Total: 7.65mins
ROUTINE ======================== sync.(*Pool).getSlow in /opt/go1.8/src/sync/pool.go
1.07mins 3.69mins (flat, cum) 48.29% of Total
. . 144: x = p.New()
. . 145: }
. . 146: return x
. . 147:}
. . 148:
80ms 80ms 149:func (p *Pool) getSlow() (x interface{}) {
. . 150: // See the comment in pin regarding ordering of the loads.
30ms 30ms 151: size := atomic.LoadUintptr(&p.localSize) // load-acquire
180ms 180ms 152: local := p.local // load-consume
. . 153: // Try to steal one element from other procs.
30ms 130ms 154: pid := runtime_procPin()
20ms 20ms 155: runtime_procUnpin()
730ms 730ms 156: for i := 0; i < int(size); i++ {
51.55s 51.55s 157: l := indexLocal(local, (pid+i+1)%int(size))
580ms 2.01mins 158: l.Lock()
10.65s 10.65s 159: last := len(l.shared) - 1
40ms 40ms 160: if last >= 0 {
. . 161: x = l.shared[last]
. . 162: l.shared = l.shared[:last]
. 10ms 163: l.Unlock()
. . 164: break
. . 165: }
490ms 37.59s 166: l.Unlock()
. . 167: }
40ms 40ms 168: return x
. . 169:}
. . 170:
. . 171:// pin pins the current goroutine to P, disables preemption and returns poolLocal pool for the P.
. . 172:// Caller must call runtime_procUnpin() when done with the pool.
. . 173:func (p *Pool) pin() *poolLocal {
https://golang.org/pkg/sync/#Pool
A free list maintained as part of a short-lived object is not a suitable use for a Pool, since the overhead does not amortize well in that scenario. It is more efficient to have such objects implement their own free list.
https://dave.cheney.net/2015/11/29/a-whirlwind-tour-of-gos-runtime-environment-variables
http://golang-jp.org/doc/effective_go.html#leaky_buffer
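Following the note above about objects implementing their own free list, and the leaky buffer pattern from Effective Go linked right above, a buffered channel can serve as a per-type free list: objects are reused instead of reallocated, the buffer never grows, and there is no cross-P stealing loop like the one in getSlow. A minimal sketch, where aPrivateStruct stands in for the real message struct and the capacity of 1024 is an assumption:

// freeList is a leaky buffer: a fixed-capacity channel of reusable objects.
var freeList = make(chan *aPrivateStruct, 1024)

// getMsg reuses an object if one is available, otherwise allocates a new one.
func getMsg() *aPrivateStruct {
	select {
	case m := <-freeList:
		return m
	default:
		return new(aPrivateStruct)
	}
}

// putMsg returns an object to the free list; if the list is full the object is
// simply dropped and left to the GC, so putMsg never blocks.
func putMsg(m *aPrivateStruct) {
	*m = aPrivateStruct{} // reset before reuse
	select {
	case freeList <- m:
	default:
	}
}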