Description

runtime: make sure associated defers are copyable before trying to copy a stack.
Defers generated from cgo lie to us about their argument layout.
Mark those defers as not copyable.
CL 83820043 contains an additional test for this code and should be
checked in (and enabled) after this change is in.
Fixes bug 7695.
Patch Set 1
Patch Set 2 : diff -r a6073283d113 https://khr%40golang.org@code.google.com/p/go/
Patch Set 3 : diff -r a6073283d113 https://khr%40golang.org@code.google.com/p/go/
Patch Set 4 : diff -r a6073283d113 https://khr%40golang.org@code.google.com/p/go/
Patch Set 5 : diff -r bbf841e16510 https://khr%40golang.org@code.google.com/p/go/
Messages

Total messages: 38
Hello golang-codereviews@googlegroups.com, I'd like you to review this change to https://khr%40golang.org@code.google.com/p/go/
*** Submitted as https://code.google.com/p/go/source/detail?r=3374b2b0759f ***

runtime: make sure associated defers are copyable before trying to copy a stack.

Defers generated from cgo lie to us about their argument layout. Mark those defers as not copyable.

CL 83820043 contains an additional test for this code and should be checked in (and enabled) after this change is in.

Fixes bug 7695.

LGTM=rsc
R=golang-codereviews, rsc
CC=golang-codereviews
https://codereview.appspot.com/84740043
Message was sent while issue was closed.
This CL appears to have broken the plan9-386-cnielsen builder. See http://build.golang.org/log/5af19cbb425255754504a6923b865189d457e852
Message was sent while issue was closed.
It seems TestDeferPtrs can run rather slowly, which led the parallel runtime test to time out on the Plan 9 builder.

cpu% cd /usr/go/src; GOMAXPROCS=2 go test -v runtime -run TestDeferPtrs -short -timeout 280s -cpu 1,2,4
=== RUN TestDeferPtrs
--- PASS: TestDeferPtrs (1.12 seconds)
=== RUN TestDeferPtrs-2
--- PASS: TestDeferPtrs-2 (51.81 seconds)
=== RUN TestDeferPtrs-4
--- PASS: TestDeferPtrs-4 (20.27 seconds)
PASS
ok  	runtime	92.107s

Is this expected?
On OS X:

g% 386 go test -v -run TestDeferPtrs -cpu=1,2,4 -short
=== RUN TestDeferPtrs
--- PASS: TestDeferPtrs (0.48 seconds)
=== RUN TestDeferPtrs-2
--- PASS: TestDeferPtrs-2 (0.51 seconds)
=== RUN TestDeferPtrs-4
--- PASS: TestDeferPtrs-4 (0.71 seconds)
PASS
ok  	runtime	2.174s
g%

The fact that the slowdown you see varies so much with GOMAXPROCS suggests that the problem is specific to something about scheduling on Plan 9. My first guess would be that osyield is not doing what it should. The implementation (sleep(0)) looks correct to me, but maybe the kernel doesn't do what we think it does. Can you try changing it to sleep(1)? That may slow other things down, but if it fixes this test we'll have a good idea where the problem is.

Thanks.
Russ
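For reference, the Plan 9 osyield under discussion is a one-line wrapper over the kernel sleep call; a minimal sketch of the current code and the suggested experiment, assuming the wrapper is named runtime·sleep (the exact names are assumptions):

	// os_plan9.c (sketch): give up the CPU via a zero-length sleep,
	// which the Plan 9 kernel is expected to treat as a yield().
	void
	runtime·osyield(void)
	{
		runtime·sleep(0);	// the experiment: change this to runtime·sleep(1)
	}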
> The fact that the slowdown you see varies so much with GOMAXPROCS
> suggests that the problem is specific to something about scheduling
> on Plan 9. My first guess would be that osyield is not doing what it
> should. The implementation (sleep(0)) looks correct to me but maybe
> the kernel doesn't do what we think it does. Can you try changing it
> to sleep(1)? That may slow other things down, but if it fixes this
> test we'll have a good idea where the problem is.

I've tried changing osyield to sleep(1) and I get similar results. In the current Plan 9 kernel, sleep(0) is implemented as yield(), so I think it should behave as expected.

-- David du Colombier
What does 'time' say about user vs sys vs real?
> What does 'time' say about user vs sys vs real?

cpu% time go test -v -run TestDeferPtrs -cpu 1,2,4 -short
=== RUN TestDeferPtrs
--- PASS: TestDeferPtrs (1.05 seconds)
=== RUN TestDeferPtrs-2
--- PASS: TestDeferPtrs-2 (19.75 seconds)
=== RUN TestDeferPtrs-4
--- PASS: TestDeferPtrs-4 (47.14 seconds)
PASS
ok  	runtime	69.002s
7.60u 35.01s 75.00r	 go test -v -run TestDeferPtrs ...

-- David du Colombier
does plan 9 have a system call tracer now?
> does plan 9 have a system call tracer now?

Yes, Ron Minnich implemented ratrace in the Plan 9 kernel a few years ago.

http://9grid.fr/magic/man2html/1/ratrace

-- David du Colombier
It is also quite surprising that the results are better on slower machines (running almost the same kernel).

A slower dev machine:

cpu% time go test -v -run TestDeferPtrs -cpu 1,2,4 -short
=== RUN TestDeferPtrs
--- PASS: TestDeferPtrs (1.34 seconds)
=== RUN TestDeferPtrs-2
--- PASS: TestDeferPtrs-2 (19.22 seconds)
=== RUN TestDeferPtrs-4
--- PASS: TestDeferPtrs-4 (18.85 seconds)
PASS
ok  	runtime	40.717s
7.65u 37.62s 47.15r	 go test -v -run TestDeferPtrs ...

The old Go builder (much slower):

cpu% time go test -v -run TestDeferPtrs -cpu 1,2,4 -short
=== RUN TestDeferPtrs
--- PASS: TestDeferPtrs (4.36 seconds)
=== RUN TestDeferPtrs-2
--- PASS: TestDeferPtrs-2 (20.55 seconds)
=== RUN TestDeferPtrs-4
--- PASS: TestDeferPtrs-4 (20.31 seconds)
PASS
ok  	runtime	49.449s
30.44u 33.12s 66.93r	 go test -v -run TestDeferPtrs ...

-- David du Colombier
Here is the trace (15 MB) when running:

ratrace -c runtime.test -test.run TestDeferPtrs -test.cpu 1,2,4 -test.v

http://www.9legacy.org/go/doc/runtime.trace

-- David du Colombier
And naturally, the test is much faster when run through ratrace. -- David du Colombier
How much is much faster?
> How much is much faster?

On the trace I uploaded, I get:

--- PASS: TestDeferPtrs (4.76 seconds)
--- PASS: TestDeferPtrs-2 (7.63 seconds)
--- PASS: TestDeferPtrs-4 (10.57 seconds)

-- David du Colombier
FWIW, the trace is malformed. This is typical:

224014 runtime.test Sleep 2af87 1
224020 runtime.test Tsemacquire 2afa7 0x10440a8c 0 = 1 "" 1396984274406373456 1396984274406377647
224020 runtime.test Pread 2af17 3 0x10483e88 8 0 0x10483e88/".c....]." 8 0 = 8 "" 1396984274406754789 1396984274406764008
 = 1 "" 1396984274405982904 1396984274405987094

It looks like the different threads are running together. That's a shame. But maybe it's good enough to get the general idea.

g% awk '/^[0-9]/ {time[$3]+=$(NF)-$(NF-1); n[$3]++} END{for(k in time) printf("%.3f %d %s\n", time[k]/1e9, n[k], k)}' z | 9 sort -n
0.000 1 Notify
0.000 1 Errstr
0.000 6 Rfork
0.000 7 Brk
0.000 1 Exits
0.007 5 Close
0.889 1508 Tsemacquire
1.691 1457 Sleep
4.802 9 Pwrite
4.877 38834 Open
4.908 5953 Semacquire
5.522 8152 Semrelease
14.615 51594 Pread
g%

Those 38,834 Open calls are opening /env/GOTRACEBACK over and over again, so we should cache that. https://codereview.appspot.com/85430046

The 51,594 Pread calls are reading /dev/bintime. I am less sure what to do about that. If it's just one call site that is causing the problem, we could address it there, but I don't see an obvious candidate. Maybe it doesn't matter as much as it appears to, but it sure looks like it matters a lot.

I don't know why there are so many. There are 4 calls during each garbage collection, but there have not been 10,000+ garbage collections. There are two calls during notetsleep; have there been 25,000 notetsleeps? Each tsleep would call Tsemacquire or Semacquire, so probably not.

Unmangling the log a little doesn't change the basic magnitudes:

g% tr ' ' "$nl" < z | grep '^[A-Z][a-z]' | sort | uniq -c | sort -n
    1 Errstr
    1 Exits
    1 Notify
    5 Close
    6 Rfork
    7 Brk
    9 Pwrite
 2108 Sleep
 2193 Tsemacquire
 8362 Semacquire
 9656 Semrelease
39580 Open
56766 Pread
g%

A completely unmangled log might be nice, but at least patch in 85430046 and see if it helps.

Russ
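The caching fix in CL 85430046 amounts to reading the environment variable once and answering later queries from memory; a rough sketch of the idea, not the CL's actual diff (the real function also reports a crash flag, elided here):

	// sketch: read GOTRACEBACK once and cache it, so later queries
	// don't cost an Open/Pread of /env/GOTRACEBACK on Plan 9.
	int32
	runtime·gotraceback(void)
	{
		byte *p;
		static int32 cache = -1;	// -1: environment not read yet

		if(cache >= 0)
			return cache;	// fast path: no system calls
		p = runtime·getenv("GOTRACEBACK");	// the call behind all those Opens
		if(p == nil || p[0] == 0)
			cache = 1;	// default traceback level
		else
			cache = runtime·atoi(p);
		return cache;	// racy, but both racers compute the same value
	}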
With your CL 85430046 applied, I get:

    1 Errstr
    1 Exits
    1 Notify
    5 Close
    5 Rfork
    7 Brk
    8 Pwrite
   13 Open
 1495 Sleep
 2212 Tsemacquire
 7648 Semacquire
 8980 Semrelease
56110 Pread

Which is already much better. However, the runtime test is still timing out.

-- David du Colombier
Can you run -cpu=1 -cpu=2 and -cpu=4 separately and see whether the shape of the counts is different for the three cases? I still think this is a scheduling issue. How many CPUs does the machine have? Is runtime.ncpu set correctly?
> Can you run -cpu=1 -cpu=2 and -cpu=4 separately and see whether the
> shape of the counts is different for the three cases? I still think
> this is a scheduling issue. How many CPUs does the machine have? Is
> runtime.ncpu set correctly?

The machine has a single CPU. runtime.ncpu is equal to 1.

cpu% ratrace -c runtime.test -test.run TestDeferPtrs -test.cpu 1 -test.v
=== RUN TestDeferPtrs
--- PASS: TestDeferPtrs (2.52 seconds)
PASS

    1 Errstr
    1 Exits
    1 Notify
    1 Rfork
    3 Pwrite
    5 Close
    6 Brk
   12 Open
  291 Semrelease
  293 Semacquire
 4835 Sleep
22400 Pread

cpu% ratrace -c runtime.test -test.run TestDeferPtrs -test.cpu 2 -test.v
=== RUN TestDeferPtrs-2
--- PASS: TestDeferPtrs-2 (3.60 seconds)
PASS

    1 Errstr
    1 Exits
    1 Notify
    4 Pwrite
    4 Rfork
    5 Close
    6 Brk
   13 Open
  510 Tsemacquire
  557 Sleep
 2655 Semacquire
 3012 Semrelease
27440 Pread

cpu% ratrace -c runtime.test -test.run TestDeferPtrs -test.cpu 4 -test.v
=== RUN TestDeferPtrs-4
--- PASS: TestDeferPtrs-4 (7.52 seconds)
PASS

    1 Errstr
    1 Exits
    1 Notify
    5 Close
    5 Pwrite
    5 Rfork
    7 Brk
   13 Open
  868 Sleep
 1706 Tsemacquire
 5017 Semacquire
 5993 Semrelease
28657 Pread

-- David du Colombier
It may not be relevant, but maybe it is. I still want to see why we are reading the time so much. Here are all the calls I see in parts of the runtime used in a plan9 build:

hashinit - 1 at startup
	alg.goc:498: int64 t = runtime·nanotime();
notetsleep - 1 + 1 per runtime.semasleep
	lock_sema.c:192: deadline = runtime·nanotime() + ns;
	lock_sema.c:205: ns = deadline - runtime·nanotime();
garbage collection - 5 per GC
	mgc0.c:2307: a.start_time = runtime·nanotime();
	mgc0.c:2325: a.start_time = runtime·nanotime();
	mgc0.c:2381: t1 = runtime·nanotime();
	mgc0.c:2396: t2 = runtime·nanotime();
	mgc0.c:2402: t3 = runtime·nanotime();
	mgc0.c:2416: t4 = runtime·nanotime();
MHeap_FreeLocked - 1 per call
	mheap.c:416: s->unusedsince = runtime·nanotime();
MHeap_Scavenger - 1-2 per tick (minutes)
	mheap.c:534: now = runtime·nanotime();
	mheap.c:547: now = runtime·nanotime();
net.runtimeNano - 1 per call from package net
	netpoll.goc:89: ns = runtime·nanotime();
runtime_pollSetDeadline - 1 per call from package net
	netpoll.goc:183: if(d != 0 && d <= runtime·nanotime())
time.now - 1 per call from package time
	os_plan9.c:153: ns = runtime·nanotime();
schedinit - 1 at startup
	proc.c:167: runtime·sched.lastpoll = runtime·nanotime();
goroutineheader - 1 per printed header
	proc.c:296: waitfor = (runtime·nanotime() - gp->waitsince) / (60LL*1000*1000*1000);
findrunnable - 1 per call with network waiting
	proc.c:1233: runtime·atomicstore64(&runtime·sched.lastpoll, runtime·nanotime());
sysmon - 1 per cycle (10ms)
	proc.c:2542: now = runtime·nanotime();
schedtrace - 1 per call (debugging only)
	proc.c:2698: now = runtime·nanotime();
tickspersecond - a few per call to pprof.runtime_cyclesPerSecond
	runtime.c:308: t0 = runtime·nanotime();
	runtime.c:311: t1 = runtime·nanotime();
time.runtimeNano - 1 per call from package time
	time.goc:34: ns = runtime·nanotime();
runtime.tsleep - 1 per call
	time.goc:81: t.when = runtime·nanotime() + ns;
timerproc - 1 per loop (for active timers)
	time.goc:201: now = runtime·nanotime();

Of these, the one that looks most suspicious is the call in sysmon, which repeats at least every 10ms but possibly much faster. Unfortunately it only does one reading of the time per sleep, so that would only explain a small fraction of the time reads.

I don't know. Can you run the program under acid and set a breakpoint at runtime.nanotime and see where all the calls are coming from? Even the cpu=1 case is far too slow.

Another possible way to debug would be to copy and paste runtime.nanotime into a new function runtime.nanotime1, which would use a different fd. If you put calls to nanotime and then nanotime1 in runtime.osinit, then you'll be guaranteed nanotime uses fd 3 and nanotime1 uses fd 4. Then you can separate the two kinds of Pread in the trace, change some of the nanotime calls to nanotime1, and find which call site is the one repeating so much.

Russ
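The nanotime1 trick works because each copy of the reader caches its own file descriptor, so the two kinds of Pread are distinguishable in a ratrace log; a rough sketch, assuming nanotime is the /dev/bintime reader from os_plan9.c (wrapper names and signatures here are assumptions):

	// sketch: duplicate of the time reader with its own cached fd
	int64
	runtime·nanotime1(void)
	{
		static int32 fd = -1;	// distinct from runtime·nanotime's fd
		byte b[8];
		uint32 hi, lo;

		if(fd < 0)
			fd = runtime·open("/dev/bintime", OREAD);
		if(runtime·pread(fd, b, sizeof b, 0) != sizeof b)
			return 0;
		// /dev/bintime yields a big-endian 64-bit nanosecond count
		hi = b[0]<<24 | b[1]<<16 | b[2]<<8 | b[3];
		lo = b[4]<<24 | b[5]<<16 | b[6]<<8 | b[7];
		return (int64)hi<<32 | lo;
	}

Calling nanotime and then nanotime1 once each in runtime.osinit, as suggested above, pins them to fd 3 and fd 4 respectively.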
> sysmon - 1 per cycle (10ms)
> proc.c:2542: now = runtime·nanotime();

As expected, this call is not very frequent: 482 / 26456 calls.

> garbage collection - 5 per GC

Most of the runtime·nanotime() calls come from mgc0.c: 24088 / 26518 calls.

> mgc0.c:2307: a.start_time = runtime·nanotime();

3789 / 26470 calls

> mgc0.c:2325: a.start_time = runtime·nanotime();

4102 / 26462 calls

> mgc0.c:2381: t1 = runtime·nanotime();

3962 / 26462 calls

> mgc0.c:2396: t2 = runtime·nanotime();

4046 / 26494 calls

> mgc0.c:2402: t3 = runtime·nanotime();

4094 / 26514 calls

> mgc0.c:2416: t4 = runtime·nanotime();

4102 / 26466 calls

Out of curiosity, I've tried applying my patch to enable the use of the nanotime system call instead of reading from /dev/bintime (CL 52170044). After applying the patch, the TestDeferPtrs test is a bit faster and the runtime test usually succeeds. But I still wonder why the garbage collector is called so often.

-- David du Colombier
GC calls nanotime 6 times. If that is expensive, then there is an issue in the platform implementation of nanotime. nanotime must be cheap one way or another. We also call nanotime once per timer setup, and that can produce many more nanotime calls than 6 per GC.
Nanotime is much cheaper when implemented as a syscall. It's our intention to move to a syscall implementation (CL 52170044) once everyone has nanotime available in their kernel. -- David du Colombier
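For comparison, the syscall-based implementation in CL 52170044 collapses the whole open/pread dance into a single trap; a guess at its shape, assuming an assembly stub wrapping the nanosecond-clock syscall from the sys-nanotime kernel patch (the stub name is an assumption, not the CL's actual code):

	// sketch: nanotime as one system call, no file I/O at all
	int64
	runtime·nanotime(void)
	{
		return runtime·nsec();	// hypothetical stub for the kernel's nsec syscall
	}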
Yes, the improvement that comes with a syscall is dramatic, both in terms of overhead and in terms of the accuracy of the values. Glad to hear you're going to do that :-)

ron
On Wed, Apr 9, 2014 at 9:03 AM, David du Colombier <0intro@gmail.com> wrote:
> once everyone will have nanotime available in their kernel.

This seems highly unlikely.

-- Aram Hăvărneanu
I missed that growStackIter in stack_test.go calls GC. That explains why there are so many garbage collections, and thus why there are so many time calls. I sent CL 86040043 to cut the number of time calls by 3x.

Now that we understand it, I am not so sure that the time is the problem. I was chasing it because it meant something else was happening far more than I expected. It's probably the collection itself, or the synchronization involved. Can you please run with GODEBUG=gctrace=1 and send that output?

By the way, if it were me, I would make the Plan 9 port depend on the new syscall and force people to get a new kernel, assuming the code is in the standard distribution.

Thanks.
Russ
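CL 86040043 is not quoted here, but cutting the per-GC time calls by 3x suggests taking the intermediate timestamps only when tracing is on; a speculative sketch of that shape (the helper and its use are assumptions, not the CL's actual diff):

	// hypothetical helper: read the clock only if gctrace output needs it
	static int64
	tracetime(void)
	{
		if(runtime·debug.gctrace > 0)
			return runtime·nanotime();
		return 0;	// the deltas are only printed when tracing anyway
	}

with mgc0.c's t1, t2, and t3 assignments switched to tracetime(), while t0 and t4 keep real readings for pause accounting.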
On Wed, Apr 9, 2014 at 4:31 PM, Russ Cox <rsc@golang.org> wrote:
> assuming the code is in the standard distribution.

That's the problem. The syscall is only in one of the 64-bit ports (there are two ports done independently), a port which is not even in the standard distribution yet (after two years!). The reason there are now N forks (and not just the standard distribution) is that Geoff doesn't integrate patches way less controversial than this one.

-- Aram Hăvărneanu
I submitted the patch to Plan 9 only a few months ago. It hasn't been accepted yet, mainly because there was no real need until recently and I haven't tried to push it. When Ron Minnich wrote the first version of this patch for the amd64 kernel, as part of NxM, Jim McKie was positive about this change, so I don't think there is any reason it would not be accepted in Plan 9 today.

http://plan9.bell-labs.com/sources/patch/maybe/sys-nanotime/

-- David du Colombier
Okay, well if it's not in the standard kernel then obviously don't worry about it.

As I said, I don't believe the nanotime calls are the fundamental issue. They were representative of something happening a LOT, and the question was what it was. We know now: it was garbage collection. It is probably the synchronization surrounding garbage collection that is causing the extra time, not the timing calls. The GODEBUG=gctrace=1 output will tell us more.

Russ
> The GODEBUG=gctrace=1 output will tell us more.

Here are the gc traces with -cpu=1, -cpu=2, and -cpu=4:

http://www.9legacy.org/go/doc/gctrace.1
http://www.9legacy.org/go/doc/gctrace.2
http://www.9legacy.org/go/doc/gctrace.4

-- David du Colombier
[non-plan9 guys moved to bcc]

On Wed, Apr 9, 2014 at 1:42 PM, David du Colombier <0intro@gmail.com> wrote:
> > The GODEBUG=gctrace=1 output will tell us more.
>
> Here are the gc traces with -cpu=1 -cpu=2 and -cpu=4:
>
> http://www.9legacy.org/go/doc/gctrace.1
> http://www.9legacy.org/go/doc/gctrace.2
> http://www.9legacy.org/go/doc/gctrace.4

Thanks. That suggests that most of the time stoptheworld is fast but occasionally it takes 100ms, like this one from gctrace.2:

gc3303(1): 0+0+118 ms, 0 -> 0 MB, 216 (307-91) objects, 42/0/37 sweeps, 0(0) handoff, 0(0) steal, 0/0/0 yields

It's not 100% clear though, because in the X+Y+Z ms print, the Z is both the time spent waiting for stop the world and the time spent waiting for background workers to finish. There should be no background workers, because ncpu = 1, but you never know. (By the same token, since ncpu=1, the stoptheworld should be trivial, and yet the slow 3rd number only happens in gctrace.2 and gctrace.4, not gctrace.1.)

Can you change the mgc0.c print? It says:

	runtime·printf("gc%d(%d): %D+%D+%D ms, %D -> %D MB, %D (%D-%D) objects,"
			" %d/%d/%d sweeps,"
			" %D(%D) handoff, %D(%D) steal, %D/%D/%D yields\n",
		mstats.numgc, work.nproc, (t3-t2)/1000000, (t2-t1)/1000000, (t1-t0+t4-t3)/1000000,

and I'd like it to say:

	runtime·printf("gc%d(%d): %D+%D+%D+%D us, %D -> %D MB, %D (%D-%D) objects,"
			" %d/%d/%d sweeps,"
			" %D(%D) handoff, %D(%D) steal, %D/%D/%D yields\n",
		mstats.numgc, work.nproc, (t1-t0)/1000, (t2-t1)/1000, (t3-t2)/1000, (t4-t3)/1000,

The changes are:
1) an extra +%D in the format string
2) format string s/ms/us/
3) there is an extra time delta in the argument list
4) all time deltas are /1000 instead of /1000000
5) time deltas are in order: t1-t0, t2-t1, t3-t2, t4-t3.

Thanks.
Here are the gc traces with the new print:

http://www.9legacy.org/go/doc/gctrace.print.1
http://www.9legacy.org/go/doc/gctrace.print.2
http://www.9legacy.org/go/doc/gctrace.print.4

-- David du Colombier
Here is a bug. Whether it's "the" bug is unclear. In os_plan9.c it says:

	int32
	runtime·semasleep(int64 ns)
	{
		int32 ret;
		int32 ms;

		if(ns >= 0) {
			ms = runtime·timediv(ns, 1000000, nil);
			ret = runtime·plan9_tsemacquire(&m->waitsemacount, ms);
			if(ret == 1)
				return 0;  // success
			return -1;  // timeout or interrupted
		}

		while(runtime·plan9_semacquire(&m->waitsemacount, 1) < 0) {
			/* interrupted; try again (c.f. lock_sema.c) */
		}
		return 0;  // success
	}

runtime.notetsleep and runtime.notetsleepg call notetsleep, which calls runtime.semasleep. There are only two cases where ns >= 0: in time.goc, the ns is the time to the next timer, and in stoptheworld, we use 100us sleeps between preemption attempts. time.goc is not involved, but stoptheworld was already a suspect.

If you pass ns = 100,000 to this function, timediv will return ms = 0. tsemacquire in /sys/src/9/port/sysproc.c will return immediately when ms == 0 and the semaphore cannot be acquired immediately - it doesn't sleep - so notetsleep will spin, chewing cpu and repeatedly reading the time, until the 100us have passed. Thanks to the time reads it won't take too many iterations, but whatever we are waiting for does not get a chance to run. Eventually the notetsleep spin loop returns and we end up in the stoptheworld spin loop - actually a sleep loop, but we're not doing a good job of sleeping. After 100ms or so of this, the kernel says enough and schedules a different thread. That thread manages to do whatever we're waiting for, and the spinning in the other thread stops. If tsemacquire had actually slept, this would have happened much quicker.

My suggestion is to try adding

	if(ms == 0)
		ms = 1;

before the call to plan9_tsemacquire.

In your new logs, something happens around gc2052 in http://www.9legacy.org/go/doc/gctrace.print.2. The system basically dies after that. My guess is that a second thread is trying to do things, and we're seeing this spinning behavior. I'll be curious to see if bumping to ms=1 helps.

Russ
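Folding that suggestion into the code quoted above gives the following patched timed path; a sketch based on this message, not on the final CL:

	// semasleep with the proposed fix: never pass ms == 0 to tsemacquire,
	// since the kernel returns immediately instead of sleeping in that case.
	int32
	runtime·semasleep(int64 ns)
	{
		int32 ret;
		int32 ms;

		if(ns >= 0) {
			ms = runtime·timediv(ns, 1000000, nil);
			if(ms == 0)
				ms = 1;	// round up: ns < 1ms would otherwise busy-spin
			ret = runtime·plan9_tsemacquire(&m->waitsemacount, ms);
			if(ret == 1)
				return 0;  // success
			return -1;  // timeout or interrupted
		}

		while(runtime·plan9_semacquire(&m->waitsemacount, 1) < 0) {
			/* interrupted; try again (c.f. lock_sema.c) */
		}
		return 0;  // success
	}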
> Here is a bug. Whether it's "the" bug is unclear.

Amazing find! It really seems to be "the" bug. After applying your fix, the runtime test passes successfully and TestDeferPtrs always finishes in constant time with -cpu 1,2,4.

cpu% go test -v -run TestDeferPtrs -cpu 1,2,4
=== RUN TestDeferPtrs
--- PASS: TestDeferPtrs (1.06 seconds)
=== RUN TestDeferPtrs-2
--- PASS: TestDeferPtrs-2 (1.13 seconds)
=== RUN TestDeferPtrs-4
--- PASS: TestDeferPtrs-4 (1.13 seconds)
PASS
ok  	runtime	4.423s

Here are the gc traces for reference:

http://www.9legacy.org/go/doc/gctrace.msfix.1
http://www.9legacy.org/go/doc/gctrace.msfix.2
http://www.9legacy.org/go/doc/gctrace.msfix.4

-- David du Colombier
great. the rest of the build might be faster too.
send me a CL :-)