Description

runtime: Use same [monotonic] reference clock for all timer values
This lays the groundwork for making Go robust when the system's
calendar time jumps around. All input values to the runtimeTimer
struct now use the runtime clock as a common reference point.
This affects net.Conn.Set[Read|Write]Deadline(), time.Sleep(),
time.Timer, etc. Under normal conditions, behavior is unchanged.
Each platform and architecture's implementation of runtime·nanotime()
should be modified to use a monotonic system clock when possible.
Platforms/architectures modified and tested with monotonic clock:
linux/x86 - clock_gettime(CLOCK_MONOTONIC)
Update issue 6007
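For readers following along, here is a minimal user-space Go sketch of the clock this change builds on, assuming Linux and the raw clock_gettime syscall. The monotonicNanos helper is illustrative only, not the runtime's actual C implementation.

package main

import (
	"fmt"
	"syscall"
	"time"
	"unsafe"
)

const clockMonotonic = 1 // CLOCK_MONOTONIC from <time.h> on Linux

// monotonicNanos mirrors what runtime·nanotime does on linux/x86 after
// this CL: it reads the monotonic clock, which measures elapsed time
// and is unaffected by calendar (wall clock) adjustments.
func monotonicNanos() int64 {
	var ts syscall.Timespec
	_, _, errno := syscall.Syscall(syscall.SYS_CLOCK_GETTIME,
		clockMonotonic, uintptr(unsafe.Pointer(&ts)), 0)
	if errno != 0 {
		panic(errno)
	}
	return int64(ts.Sec)*1e9 + int64(ts.Nsec)
}

func main() {
	start := monotonicNanos()
	time.Sleep(10 * time.Millisecond)
	// Even if the system date is changed during the sleep, the
	// difference below still reflects real elapsed time.
	fmt.Println("elapsed ns:", monotonicNanos()-start)
}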
Patch Set 1
Patch Set 2 : diff -r 761cf9b4e530 https://code.google.com/p/go/
Patch Set 3 : diff -r 761cf9b4e530 https://code.google.com/p/go/
Patch Set 4 : diff -r 322f646feecb https://code.google.com/p/go
Patch Set 5 : diff -r 322f646feecb https://code.google.com/p/go
Total comments: 1
Patch Set 6 : diff -r 5b5a2d2d2563 https://code.google.com/p/go
Total comments: 6
Patch Set 7 : diff -r 2df447356d7f https://code.google.com/p/go
Total comments: 2
Patch Set 8 : diff -r da649606c628 https://code.google.com/p/go
Patch Set 9 : diff -r e4a4cb47c141 https://code.google.com/p/go
Patch Set 10 : diff -r 51d9b0cf3b8d https://code.google.com/p/go
Messages

Total messages: 34
Hello golang-codereviews@googlegroups.com, I'd like you to review this change to https://code.google.com/p/go/
LGTM

When/if it is submitted, please also file an issue to support monotonic time on other OSes.
On 2014/01/16 07:23:34, dvyukov wrote:
> LGTM
>
> When/if it is submitted, please also file an issue to support monotonic time
> on other OSes.

I planned to leave 6007 open until the main platforms have a monotonic runtime clock.
Jan 18 update: Implemented monotonic clock for Windows.
This fails on windows:

C:\go\root\src>all.bat
# Building C bootstrap tool.
cmd/dist

# Building compilers and Go bootstrap tool.
lib9 libbio libmach liblink misc/pprof cmd/addr2line cmd/objdump cmd/prof
cmd/cc cmd/gc cmd/8l cmd/8a cmd/8c cmd/8g pkg/runtime pkg/errors
pkg/sync/atomic pkg/sync pkg/io pkg/unicode pkg/unicode/utf8
pkg/unicode/utf16 pkg/bytes pkg/math pkg/strings pkg/strconv pkg/bufio
pkg/sort pkg/container/heap pkg/encoding/base64 pkg/syscall pkg/time
pkg/os pkg/reflect pkg/fmt pkg/encoding pkg/encoding/json pkg/flag
pkg/path/filepath pkg/path pkg/io/ioutil pkg/log pkg/regexp/syntax
pkg/regexp pkg/go/token pkg/go/scanner pkg/go/ast pkg/go/parser
pkg/os/exec pkg/os/signal pkg/net/url pkg/text/template/parse
pkg/text/template pkg/go/doc pkg/go/build cmd/go

fatal error: runtime: stack split during syscall

runtime stack:
runtime.throw(0x86674c)
	C:/go/root/src/pkg/runtime/panic.c:463 +0x67
runtime.newstack()
	C:/go/root/src/pkg/runtime/stack.c:261 +0x58e
runtime.morestack()
	C:/go/root/src/pkg/runtime/asm_386.s:246 +0x59

goroutine 2 [stack split]:
runtime.notetsleepg(0x31167f9c, 0xf8475800, 0xd)
	C:/go/root/src/pkg/runtime/lock_sema.c:254 +0x73 fp=0x31167f84
runtime.MHeap_Scavenger()
	C:/go/root/src/pkg/runtime/mheap.c:458 +0xc2 fp=0x31167fcc
runtime.goexit()
	C:/go/root/src/pkg/runtime/proc.c:1405 fp=0x31167fd0
created by runtime.main
	C:/go/root/src/pkg/runtime/proc.c:179

goroutine 1 [running]:
	goroutine running on other thread; stack unavailable

# Building packages and commands.
fatal error: runtime: stack split during syscall

runtime stack:
runtime.throw(0x86674c)
	C:/go/root/src/pkg/runtime/panic.c:463 +0x67
runtime.newstack()
	C:/go/root/src/pkg/runtime/stack.c:261 +0x58e
runtime.morestack()
	C:/go/root/src/pkg/runtime/asm_386.s:246 +0x59

goroutine 2 [stack split]:
runtime.notetsleepg(0x31167f9c, 0xf8475800, 0xd)
	C:/go/root/src/pkg/runtime/lock_sema.c:254 +0x73 fp=0x31167f84
runtime.MHeap_Scavenger()
	C:/go/root/src/pkg/runtime/mheap.c:458 +0xc2 fp=0x31167fcc
runtime.goexit()
	C:/go/root/src/pkg/runtime/proc.c:1405 fp=0x31167fd0
created by runtime.main
	C:/go/root/src/pkg/runtime/proc.c:179

goroutine 1 [running]:
	goroutine running on other thread; stack unavailable

C:\go\root\src>

https://codereview.appspot.com/53010043/diff/100001/src/pkg/runtime/os_windows.c
File src/pkg/runtime/os_windows.c (right):

src/pkg/runtime/os_windows.c:270: runtime·stdcall(runtime·QueryPerformanceFrequency, 1, &ticks);
Please check whether it fails, and panic here if it does.

src/pkg/runtime/os_windows.c:281: #pragma textflag NOSPLIT
Why did you mark it as NOSPLIT?
I'm unable to reproduce the fatal stack split on my machine. 386 and amd64 are both passing tests. I'm using MinGW[64] on a Win 7 64-bit machine. I guess if it doesn't get sorted out, I can remove the Windows portion of the patch and it can be done later.

https://codereview.appspot.com/53010043/diff/100001/src/pkg/runtime/os_windows.c
File src/pkg/runtime/os_windows.c (right):

src/pkg/runtime/os_windows.c:270: runtime·stdcall(runtime·QueryPerformanceFrequency, 1, &ticks);
> Please check whether it fails, and panic here if it does.

Will do. What is the difference between panic() and throw(), and when is each conventionally used? Is it that throw() can't be recovered?

src/pkg/runtime/os_windows.c:281: #pragma textflag NOSPLIT
> Why did you mark it as NOSPLIT?

The old nanotime() behavior got copied into now(), because the new nanotime() changed. Since the new now() is basically the old nanotime(), I figured I should copy the NOSPLIT as well.
On 2014/01/20 02:07:07, jayschwa wrote:
> I'm unable to reproduce the fatal stack split on my machine. 386 and amd64 are
> both passing tests. I'm using MinGW[64] on a Win 7 64-bit machine.

I am on windows-386 here.

Alex

https://codereview.appspot.com/53010043/diff/100001/src/pkg/runtime/os_windows.c
File src/pkg/runtime/os_windows.c (right):

src/pkg/runtime/os_windows.c:270: runtime·stdcall(runtime·QueryPerformanceFrequency, 1, &ticks);
On 2014/01/20 02:07:07, jayschwa wrote:
> Will do. What is the difference between panic() and throw(), and when is each
> conventionally used? Is it that throw() can't be recovered?

Perhaps I didn't explain myself properly: you are not checking for errors, you are assuming that ticks will be 0 on error. Please check the return value of runtime·QueryPerformanceFrequency and, if it indicates an error, call runtime·throw().

src/pkg/runtime/os_windows.c:281: #pragma textflag NOSPLIT
On 2014/01/20 02:07:07, jayschwa wrote:
> The old nanotime() behavior got copied into now(), because the new nanotime()
> changed. Since the new now() is basically the old nanotime(), I figured I
> should copy the NOSPLIT as well.

I think that assumption is wrong. NOSPLIT is an instruction to the linker to avoid inserting "stack split" code at the start of now(). Why shouldn't the new now() use stack-splitting code, just like most functions do?
I uploaded a change to check the return value of QueryPerformanceFrequency. I am not sure how to go about debugging the "stack split during syscall" panic you are experiencing. Any ideas? Perhaps it's related to NOSPLIT? Thanks for taking the time to review and test.
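For reference, the requested check looks roughly like the following user-space Go analogue of the runtime C under review; the nsecPerTick and nanotime names are local to this sketch, and the actual runtime code would call runtime·throw rather than panic.

package main

import (
	"fmt"
	"syscall"
	"unsafe"
)

var (
	kernel32 = syscall.NewLazyDLL("kernel32.dll")
	qpf      = kernel32.NewProc("QueryPerformanceFrequency")
	qpc      = kernel32.NewProc("QueryPerformanceCounter")

	nsecPerTick int64
)

func init() {
	var ticksPerSec int64
	// QueryPerformanceFrequency returns a nonzero BOOL on success;
	// brainman's point is to test that return value rather than
	// assuming ticksPerSec stays 0 on failure.
	ok, _, _ := qpf.Call(uintptr(unsafe.Pointer(&ticksPerSec)))
	if ok == 0 || ticksPerSec == 0 {
		panic("QueryPerformanceFrequency failed")
	}
	nsecPerTick = 1000000000 / ticksPerSec
}

// nanotime converts performance-counter ticks to nanoseconds, in the
// same way as the now()/nanotime() pair under review.
func nanotime() int64 {
	var ticks int64
	qpc.Call(uintptr(unsafe.Pointer(&ticks)))
	return ticks * nsecPerTick
}

func main() {
	fmt.Println("ns:", nanotime())
}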
On 2014/01/20 03:33:11, jayschwa(gmail) wrote:
> ...
> I am not sure how to go about debugging the "stack split during syscall"
> panic you are experiencing. Any ideas? ...

I don't know either. I need time to debug it, and I will, if this CL is accepted in principle.

> ... Perhaps it's related to NOSPLIT?

To your new NOSPLIT? I don't think so.

Alex

https://codereview.appspot.com/53010043/diff/120001/src/pkg/runtime/os_windows.c
File src/pkg/runtime/os_windows.c (right):

src/pkg/runtime/os_windows.c:272: else
Remove the "else". It is not needed.

src/pkg/runtime/os_windows.c:280: #pragma textflag NOSPLIT
You still have this in. Why?
https://codereview.appspot.com/53010043/diff/80001/src/pkg/runtime/os_windows.c
File src/pkg/runtime/os_windows.c (right):

src/pkg/runtime/os_windows.c:272: nsec_per_tick = 1000000000LL / ticks;
On 32-bit targets, 64-bit division compiles into a call to a stack-splitting helper function. Look at how runtime·timediv is used in the other os files.
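For anyone unfamiliar with the routine dvyukov mentions, here is a rough Go transcription of runtime·timediv (the original is C in the runtime): it divides using only shifts, adds, and comparisons, so the 386 compiler never emits a call to the stack-splitting 64-bit division helper.

package main

import "fmt"

// timediv computes v/div and v%div without a 64-bit divide instruction
// or helper call, saturating the quotient at 0x7fffffff. That makes it
// safe to use inside NOSPLIT code on 386.
func timediv(v int64, div int32, rem *int32) int32 {
	if v >= int64(div)*0x7fffffff {
		if rem != nil {
			*rem = 0
		}
		return 0x7fffffff
	}
	var res int32
	for bit := 30; bit >= 0; bit-- {
		if v >= int64(div)<<uint(bit) {
			v -= int64(div) << uint(bit)
			res += 1 << uint(bit)
		}
	}
	if rem != nil {
		*rem = int32(v)
	}
	return res
}

func main() {
	var rem int32
	sec := timediv(1500000000, 1000000000, &rem) // 1.5s worth of nanoseconds
	fmt.Println(sec, rem)                        // prints: 1 500000000
}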
dvyukov, I have replaced the 64-bit division with timediv(). Thanks for the tip.

brainman, can you retry the latest update on your 32-bit machine if you have some time?
On 2014/01/25 23:59:34, jayschwa wrote:
> ...
> brainman, can you retry the latest update on your 32-bit machine ...

It fails on my windows/386 with this message:

...
testing/iotest
testing/quick
text/scanner

# Testing packages.
? cmd/cgo [no test files]
*** Test killed: ran too long (3m0s). FAIL cmd/fix 0.047s
*** Test killed: ran too long (3m0s). FAIL cmd/go 0.156s
*** Test killed: ran too long (3m0s). FAIL cmd/gofmt 0.047s
*** Test killed: ran too long (3m0s). FAIL cmd/link 0.031s
? cmd/nm [no test files]
*** Test killed: ran too long (3m0s). FAIL cmd/pack 0.109s
? cmd/yacc [no test files]
*** Test killed: ran too long (3m0s). FAIL archive/tar 0.125s
*** Test killed: ran too long (3m0s). FAIL archive/zip 0.172s
*** Test killed: ran too long (3m0s). FAIL bufio 0.094s
*** Test killed: ran too long (3m0s). FAIL bytes 0.141s
*** Test killed: ran too long (3m0s). FAIL compress/bzip2 0.234s
*** Test killed: ran too long (3m0s). FAIL compress/flate 0.031s
*** Test killed: ran too long (3m0s). FAIL compress/gzip 0.219s
*** Test killed: ran too long (3m0s). FAIL compress/lzw 0.188s
*** Test killed: ran too long (3m0s). FAIL compress/zlib 0.141s
*** Test killed: ran too long (3m0s). FAIL container/heap 0.172s
*** Test killed: ran too long (3m0s). FAIL container/list 0.188s
*** Test killed: ran too long (3m0s). FAIL container/ring 0.109s
? crypto [no test files]
*** Test killed: ran too long (3m0s). FAIL crypto/aes 0.031s
*** Test killed: ran too long (3m0s). FAIL crypto/cipher 0.125s
*** Test killed: ran too long (3m0s). FAIL crypto/des 0.172s
*** Test killed: ran too long (3m0s). FAIL crypto/dsa 0.203s
*** Test killed: ran too long (3m0s). FAIL crypto/ecdsa 0.063s
*** Test killed: ran too long (3m0s). FAIL crypto/elliptic 0.156s
*** Test killed: ran too long (3m0s). FAIL crypto/hmac 0.156s
*** Test killed: ran too long (3m0s). FAIL crypto/md5 0.078s
*** Test killed: ran too long (3m0s). FAIL crypto/rand 0.063s
*** Test killed: ran too long (3m0s). FAIL crypto/rc4 0.188s
*** Test killed: ran too long (3m0s). FAIL crypto/rsa 0.031s
*** Test killed: ran too long (3m0s). FAIL crypto/sha1 0.188s
*** Test killed: ran too long (3m0s). FAIL crypto/sha256 0.031s
*** Test killed: ran too long (3m0s). FAIL crypto/sha512 0.031s
*** Test killed: ran too long (3m0s). FAIL crypto/subtle 0.016s
*** Test killed: ran too long (3m0s). FAIL crypto/tls 0.484s
*** Test killed: ran too long (3m0s). FAIL crypto/x509 0.109s
? crypto/x509/pkix [no test files]
*** Test killed: ran too long (3m0s). FAIL database/sql 0.047s
*** Test killed: ran too long (3m0s). FAIL database/sql/driver 0.047s
*** Test killed: ran too long (3m0s). FAIL debug/dwarf 0.250s
*** Test killed: ran too long (3m0s). FAIL debug/elf 0.047s
*** Test killed: ran too long (3m0s). FAIL debug/goobj 0.031s
*** Test killed: ran too long (3m0s). FAIL debug/gosym 0.109s
*** Test killed: ran too long (3m0s). FAIL debug/macho 0.109s
*** Test killed: ran too long (3m0s). FAIL debug/pe 0.031s
*** Test killed: ran too long (3m0s). FAIL debug/plan9obj 0.094s
? encoding [no test files]
*** Test killed: ran too long (3m0s). FAIL encoding/ascii85 0.031s
*** Test killed: ran too long (3m0s). FAIL encoding/asn1 0.031s
*** Test killed: ran too long (3m0s). FAIL encoding/base32 0.125s
*** Test killed: ran too long (3m0s). FAIL encoding/base64 0.031s
*** Test killed: ran too long (3m0s). FAIL encoding/binary 0.031s
*** Test killed: ran too long (3m0s). FAIL encoding/csv 0.031s
*** Test killed: ran too long (3m0s). FAIL encoding/gob 0.078s
*** Test killed: ran too long (3m0s). FAIL encoding/hex 0.047s
*** Test killed: ran too long (3m0s). FAIL encoding/json 0.188s
*** Test killed: ran too long (3m0s). FAIL encoding/pem 0.031s
*** Test killed: ran too long (3m0s). FAIL encoding/xml 0.156s
*** Test killed: ran too long (3m0s). FAIL errors 0.031s
*** Test killed: ran too long (3m0s). FAIL expvar 0.328s
*** Test killed: ran too long (3m0s). FAIL flag 0.438s
*** Test killed: ran too long (3m0s). FAIL fmt 0.047s
*** Test killed: ran too long (3m0s). FAIL go/ast 0.469s
*** Test killed: ran too long (3m0s). FAIL go/build 0.047s
*** Test killed: ran too long (3m0s). FAIL go/doc 0.438s
*** Test killed: ran too long (3m0s). FAIL go/format 0.047s
*** Test killed: ran too long (3m0s). FAIL go/parser 0.078s
*** Test killed: ran too long (3m0s). FAIL go/printer 0.047s
*** Test killed: ran too long (3m0s). FAIL go/scanner 0.266s
*** Test killed: ran too long (3m0s). FAIL go/token 0.297s
? hash [no test files]
*** Test killed: ran too long (3m0s). FAIL hash/adler32 0.094s
*** Test killed: ran too long (3m0s). FAIL hash/crc32 0.031s
*** Test killed: ran too long (3m0s). FAIL hash/crc64 0.125s
*** Test killed: ran too long (3m0s). FAIL hash/fnv 0.031s
*** Test killed: ran too long (3m0s). FAIL html 0.016s
*** Test killed: ran too long (3m0s). FAIL html/template 0.125s
*** Test killed: ran too long (3m0s). FAIL image 0.031s
*** Test killed: ran too long (3m0s). FAIL image/color 0.266s
? image/color/palette [no test files]
*** Test killed: ran too long (3m0s). FAIL image/draw 0.031s
*** Test killed: ran too long (3m0s). FAIL image/gif 0.141s
*** Test killed: ran too long (3m0s). FAIL image/jpeg 0.047s
*** Test killed: ran too long (3m0s). FAIL image/png 0.063s
*** Test killed: ran too long (3m0s). FAIL index/suffixarray 0.188s
*** Test killed: ran too long (3m0s). FAIL io 0.047s
*** Test killed: ran too long (3m0s). FAIL io/ioutil 0.031s
*** Test killed: ran too long (3m0s). FAIL log 0.047s
? log/syslog [no test files]
*** Test killed: ran too long (3m0s). FAIL math 0.031s
*** Test killed: ran too long (3m0s). FAIL math/big 0.188s
*** Test killed: ran too long (3m0s). FAIL math/cmplx 0.047s
*** Test killed: ran too long (3m0s). FAIL math/rand 0.047s
*** Test killed: ran too long (3m0s). FAIL mime 0.094s
*** Test killed: ran too long (3m0s). FAIL mime/multipart 0.047s
*** Test killed: ran too long (3m0s). FAIL net 0.078s
*** Test killed: ran too long (3m0s). FAIL net/http 0.141s
*** Test killed: ran too long (3m0s). FAIL net/http/cgi 0.750s
*** Test killed: ran too long (3m0s). FAIL net/http/cookiejar 0.578s
*** Test killed: ran too long (3m0s). FAIL net/http/fcgi 0.109s
*** Test killed: ran too long (3m0s). FAIL net/http/httptest 0.375s
*** Test killed: ran too long (3m0s). FAIL net/http/httputil 0.234s
? net/http/pprof [no test files]
*** Test killed: ran too long (3m0s). FAIL net/mail 0.031s
*** Test killed: ran too long (3m0s). FAIL net/rpc 0.219s
*** Test killed: ran too long (3m0s). FAIL net/rpc/jsonrpc 0.438s
*** Test killed: ran too long (3m0s). FAIL net/smtp 0.281s
*** Test killed: ran too long (3m0s). FAIL net/textproto 0.031s
*** Test killed: ran too long (3m0s). FAIL net/url 1.047s
*** Test killed: ran too long (3m0s). FAIL os 0.047s
*** Test killed: ran too long (3m0s). FAIL os/exec 0.234s
*** Test killed: ran too long (3m0s). FAIL os/signal 0.031s
*** Test killed: ran too long (3m0s). FAIL os/user 0.344s
*** Test killed: ran too long (3m0s). FAIL path 0.031s
*** Test killed: ran too long (3m0s). FAIL path/filepath 0.063s
*** Test killed: ran too long (3m0s). FAIL reflect 0.375s
*** Test killed: ran too long (3m0s). FAIL regexp 0.031s
*** Test killed: ran too long (3m0s). FAIL regexp/syntax 0.047s
*** Test killed: ran too long (3m0s). FAIL runtime 0.234s
? runtime/cgo [no test files]
*** Test killed: ran too long (3m0s). FAIL runtime/debug 0.031s
*** Test killed: ran too long (3m0s). FAIL runtime/pprof 0.047s
? runtime/race [no test files]
*** Test killed: ran too long (3m0s). FAIL sort 0.031s
*** Test killed: ran too long (3m0s). FAIL strconv 0.031s
*** Test killed: ran too long (3m0s). FAIL strings 0.031s
*** Test killed: ran too long (3m0s). FAIL sync 0.031s
*** Test killed: ran too long (3m0s). FAIL sync/atomic 0.094s
*** Test killed: ran too long (3m0s). FAIL syscall 0.031s
*** Test killed: ran too long (3m0s). FAIL testing 0.172s
? testing/iotest [no test files]
*** Test killed: ran too long (3m0s). FAIL testing/quick 0.031s
*** Test killed: ran too long (3m0s). FAIL text/scanner 0.188s
*** Test killed: ran too long (3m0s). FAIL text/tabwriter 0.031s
*** Test killed: ran too long (3m0s). FAIL text/template 0.250s
*** Test killed: ran too long (3m0s). FAIL text/template/parse 0.047s
*** Test killed: ran too long (3m0s). FAIL time 0.359s
*** Test killed: ran too long (3m0s). FAIL unicode 0.031s
*** Test killed: ran too long (3m0s). FAIL unicode/utf16 0.422s
*** Test killed: ran too long (3m0s). FAIL unicode/utf8 0.031s
? unsafe [no test files]

But it succeeds on windows/amd64.

Alex
Thanks for the report. I've seen that sort of error when nanotime() or now() (or both) return a junk value. I guess I just need to spend some time tracking down a 32-bit Windows VM or rig since testing windows/386 on 64-bit Windows isn't revealing all the gremlins.
Hmm, well I fired up a 32-bit Windows Server 2008 instance on EC2 and can't reproduce the new problem. Tests pass on it. Can you provide some more details about your setup? Hardware or VM? OS version? Thanks.
On 2014/01/29 04:48:42, jayschwa wrote:
> Can you provide some more details about your setup? ...

What are you interested in?

> Hardware or VM?

Hardware.

> OS version?

Windows XP SP3

Alex
On 2014/01/29 06:02:59, brainman wrote:
> On 2014/01/29 04:48:42, jayschwa wrote:
> > Can you provide some more details about your setup? ...
>
> What are you interested in?
>
> > Hardware or VM?
>
> Hardware.
>
> > OS version?
>
> Windows XP SP3
>
> Alex

Hi,

I'm eager to see this CL submitted. Brainman, perhaps you have hardware that is buggy with QueryPerformanceCounter?

According to Microsoft, some hardware has problems with performance counters jumping forward into the future [1]. I haven't looked at the details of this CL closely enough to know for certain whether this is the problem you are experiencing, however.

How does this CL intend to handle QueryPerformanceCounter jumping into the future [1]? I am guessing it doesn't do anything to work around it, but perhaps we should work around these buggy chipsets somehow?

According to an older 2008 forum post [2]: "in Windows XP, all AMD Athlon X2 dual core CPUs return the PC of either of the cores 'randomly' (the PC sometimes jumps a bit backwards), unless you specially install AMD dual core driver package to fix the issue."

I'm not sure what Go should do, especially given that it is buggy hardware. In a package I wrote a while ago for monotonic clock values on Windows, I compared the QueryPerformanceCounter values with non-monotonic clock values to essentially fall back to non-monotonic time on these buggy chipsets, but I could never get my hands on buggy hardware to test with, so I had no clue whether it even worked.

[1] http://support.microsoft.com/kb/274323
[2] http://forum.beyond3d.com/showthread.php?t=47951
On 2014/02/05 05:29:11, stephen.gutekanst wrote:
> ... Brainman, perhaps you have hardware that is buggy with
> QueryPerformanceCounter?
>
> According to Microsoft, some hardware has problems with performance counters
> jumping forward into the future [1]. ...

I checked my PCI IDs. They aren't listed in [1].

> According to an older 2008 forum post [2]: "in Windows XP, all AMD Athlon X2
> dual core CPUs return the PC of either of the cores 'randomly' ...

My CPU is not an AMD Athlon X2.

Alex
Whoa, TSC alert. The TSC cannot be reliably used from user space without elaborate precautions. Is that really what Windows provides?
Try this one: http://play.golang.org/p/200I8kC3Da

It is not subject to jumps and gives you 1ms precision (since we call timeBeginPeriod), which is the same as the current time.Now precision. And, yes, it is way faster.

The main challenge is to test it on different versions of Windows. On older 32-bit versions of XP I was using *(unsigned*)0x7FFE0000 to read the current time.
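For the curious: one syscall-free way to read a jump-free, interrupt-driven clock on Windows is the KUSER_SHARED_DATA page dvyukov alludes to. The sketch below is a reconstruction of that approach (not necessarily what the playground link contains) and assumes the documented KSYSTEM_TIME layout with InterruptTime at 0x7FFE0008; its precision is tied to the timer interrupt, hence the timeBeginPeriod remark above.

package main

import (
	"fmt"
	"unsafe"
)

// KUSER_SHARED_DATA is mapped read-only at the same address in every
// Windows process. The InterruptTime member is a KSYSTEM_TIME counting
// 100ns units since boot.
type ksystemTime struct {
	LowPart   uint32
	High1Time int32
	High2Time int32
}

var interruptTime = (*ksystemTime)(unsafe.Pointer(uintptr(0x7FFE0008)))

// nowNanos returns nanoseconds since boot. Re-reading until the two
// high words match guards against seeing a torn value while the kernel
// is updating the counter.
func nowNanos() int64 {
	var lo uint32
	var hi int32
	for {
		hi = interruptTime.High1Time
		lo = interruptTime.LowPart
		if hi == interruptTime.High2Time {
			break
		}
	}
	return (int64(hi)<<32 | int64(lo)) * 100
}

func main() {
	fmt.Println("ns since boot:", nowNanos())
}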
On 2014/02/05 06:16:57, dvyukov wrote:
> ... I was using *(unsigned*)0x7FFE0000 to read
> the current time.

Yeeeeeeha! Syscalls are for sissies.

Alex
On 2014/02/05 05:29:11, stephen.gutekanst wrote:
> How does this CL intend to handle QueryPerformanceCounter jumping into the
> future [1]?

I'm inclined to say it won't, and to revert the Windows changes. My main goal with this CL was to normalize timer inputs to the same clock function, nanotime(). Platform-specific implementations of a monotonic nanotime() can always go in future CLs.

Whether it goes in this CL or a future one, I'll give some thought to falling back to the lower-resolution GetTickCount if QueryPerformanceCounter anomalies are detected (see the sketch below).

Thanks all. - Jay
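A sanity check of the kind described here might look like the following sketch, which cross-checks QueryPerformanceCounter against GetTickCount over a short interval. The qpcLooksSane name and the 500ms tolerance are arbitrary assumptions, and a real version would live in the runtime rather than user space.

package main

import (
	"fmt"
	"syscall"
	"time"
	"unsafe"
)

var (
	kernel32 = syscall.NewLazyDLL("kernel32.dll")
	qpf      = kernel32.NewProc("QueryPerformanceFrequency")
	qpc      = kernel32.NewProc("QueryPerformanceCounter")
	gtc      = kernel32.NewProc("GetTickCount")
)

// qpcLooksSane measures the same interval with both clocks. On chipsets
// where the performance counter leaps (see KB274323), the two clocks
// disagree wildly, and a fallback to the coarse GetTickCount would kick in.
func qpcLooksSane() bool {
	var freq, c0, c1 int64
	if r, _, _ := qpf.Call(uintptr(unsafe.Pointer(&freq))); r == 0 || freq == 0 {
		return false // no usable performance counter at all
	}
	t0, _, _ := gtc.Call()
	qpc.Call(uintptr(unsafe.Pointer(&c0)))
	time.Sleep(100 * time.Millisecond)
	qpc.Call(uintptr(unsafe.Pointer(&c1)))
	t1, _, _ := gtc.Call()

	qpcMS := (c1 - c0) * 1000 / freq
	tickMS := int64(uint32(t1) - uint32(t0)) // GetTickCount wraps every 49.7 days
	diff := qpcMS - tickMS
	if diff < 0 {
		diff = -diff
	}
	return diff < 500 // tolerance is a guess; tune for real hardware
}

func main() {
	fmt.Println("QPC sane:", qpcLooksSane())
}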
Please revert the windows changes and then we can submit this CL as is.
On 2014/02/12 18:05:55, rsc wrote:
> Please revert the windows changes and then we can submit this CL as is.

Done.
LGTM again
go-dev.appspot.com says the status of this review is "waiting for author". Is there some additional review process step I need to do before it can be submitted? - Jay
LGTM
I tried to submit this but it fails on my Mac. At first glance I don't see what this change is doing on the Mac, but clearly something.

--- FAIL: TestVariousDeadlines4Proc (2.00 seconds)
	timeout_test.go:480: 1ns run 1/1
	timeout_test.go:503: for 1ns run 1/1, good client timeout after 5.407us, reading 0 bytes
	timeout_test.go:513: for 1ns run 1/1: server in 135.082us wrote 32768, write tcp 127.0.0.1:50347: broken pipe
	timeout_test.go:480: 2ns run 1/1
	timeout_test.go:503: for 2ns run 1/1, good client timeout after 7.747us, reading 0 bytes
	timeout_test.go:513: for 2ns run 1/1: server in 120.347us wrote 32768, write tcp 127.0.0.1:50348: broken pipe
	timeout_test.go:480: 5ns run 1/1
	timeout_test.go:503: for 5ns run 1/1, good client timeout after 4.479us, reading 0 bytes
	timeout_test.go:513: for 5ns run 1/1: server in 135.568us wrote 32768, write tcp 127.0.0.1:50349: broken pipe
	timeout_test.go:480: 50ns run 1/1
	timeout_test.go:503: for 50ns run 1/1, good client timeout after 4.216us, reading 0 bytes
	timeout_test.go:513: for 50ns run 1/1: server in 124.96us wrote 32768, write tcp 127.0.0.1:50350: broken pipe
	timeout_test.go:480: 100ns run 1/1
	timeout_test.go:503: for 100ns run 1/1, good client timeout after 18.228us, reading 0 bytes
	timeout_test.go:513: for 100ns run 1/1: server in 133.004us wrote 32768, write tcp 127.0.0.1:50351: broken pipe
	timeout_test.go:480: 200ns run 1/1
	timeout_test.go:503: for 200ns run 1/1, good client timeout after 4.441us, reading 0 bytes
	timeout_test.go:513: for 200ns run 1/1: server in 140.14us wrote 32768, write tcp 127.0.0.1:50352: broken pipe
	timeout_test.go:480: 500ns run 1/1
	timeout_test.go:503: for 500ns run 1/1, good client timeout after 4.231us, reading 0 bytes
	timeout_test.go:513: for 500ns run 1/1: server in 130.034us wrote 32768, write tcp 127.0.0.1:50353: broken pipe
	timeout_test.go:480: 750ns run 1/1
	timeout_test.go:503: for 750ns run 1/1, good client timeout after 3.184us, reading 0 bytes
	timeout_test.go:513: for 750ns run 1/1: server in 117.183us wrote 32768, write tcp 127.0.0.1:50354: broken pipe
	timeout_test.go:480: 1us run 1/1
	timeout_test.go:503: for 1us run 1/1, good client timeout after 81.945us, reading 0 bytes
	timeout_test.go:517: for 1us run 1/1, timeout waiting for server to finish writing
FAIL
FAIL	net	5.201s
I'm unable to reproduce. A fresh clone, clpatch, and all.bash yields ALL TESTS PASSED on my Macbook. (Core i7 [amd64], OS X 10.9.1) The errors you see seem like they'd be related to the deadline translation in net/fd_poll_runtime.go.
After further testing, I can intermittently reproduce the errors by manually running `go test net` several times. Sometimes it passes and sometimes it fails. However, I think the problem is independent of this CL because it happens regardless of whether I have this CL loaded or not.
Please make sure which test case fails. If it's TestVariousDeadlines1Proc or TestVariousDeadlines4Proc, no worries. Otherwise, something is wrong.

Both TestVariousDeadlines1Proc and TestVariousDeadlines4Proc are fragile; they depend on buffered stuff inside the kernel. For example, when you change some TCP-related kernel state such as window size, buffer size and/or socket buffer size (which all land on the mbufs of BSDs or the skbuffs of Linux), the tests will reveal several faces of the kernel. I have no clue how to fix this, but a workaround would be throttling the test data flow by using SetBufferSize. Of course, SetBufferSize is not a perfect API for controlling the underlying dynamic layered buffer stuff (its behavior differs between kernels and/or kernel versions), but it would probably help.
On 2014/02/22 01:20:33, mikio wrote:
> TestVariousDeadlines1Proc or TestVariousDeadlines4Proc

Those are indeed the ones I see with intermittent failures (with or without the CL applied). Thanks for the info.
*** Submitted as https://code.google.com/p/go/source/detail?r=79f855ac890d ***

runtime: use monotonic clock for timers (linux/386, linux/amd64)

This lays the groundwork for making Go robust when the system's
calendar time jumps around. All input values to the runtimeTimer
struct now use the runtime clock as a common reference point.
This affects net.Conn.Set[Read|Write]Deadline(), time.Sleep(),
time.Timer, etc. Under normal conditions, behavior is unchanged.

Each platform and architecture's implementation of runtime·nanotime()
should be modified to use a monotonic system clock when possible.

Platforms/architectures modified and tested with monotonic clock:
linux/x86 - clock_gettime(CLOCK_MONOTONIC)

Update issue 6007

LGTM=dvyukov, rsc
R=golang-codereviews, dvyukov, alex.brainman, stephen.gutekanst, dave, rsc, mikioh.mikioh
CC=golang-codereviews
https://codereview.appspot.com/53010043

Committer: Russ Cox <rsc@golang.org>