Description: encoding/json: Speed up decoding by 50%
* Avoid recalculating len(data) while decoding.
* Avoid excessive byte-to-int conversions.
* Avoid creating temporary slices in nextValue(); returning
an offset suffices.
* Cache key <=> field lookups; they use strings.EqualFold,
which is pretty slow.
* Avoid converting keys to strings when possible.
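[Editor's note: as a rough illustration of the nextValue() bullet above, here is a toy sketch of returning an offset instead of allocating a sub-slice. The function name is invented and the logic is far simpler than the real scanner — it ignores strings and escapes entirely.]

```go
package main

import "fmt"

// nextValueOffset returns the index just past the next complete JSON
// value, instead of allocating and returning a sub-slice of data.
// Toy sketch: it only balances braces/brackets and stops at a
// top-level comma; the real scanner also handles strings, escapes,
// and bare literals.
func nextValueOffset(data []byte) int {
	depth := 0
	for i, c := range data {
		switch c {
		case '{', '[':
			depth++
		case '}', ']':
			depth--
			if depth == 0 {
				return i + 1
			}
		case ',':
			if depth == 0 {
				return i
			}
		}
	}
	return len(data)
}

func main() {
	data := []byte(`{"a":1},2`)
	off := nextValueOffset(data)
	fmt.Println(off, string(data[:off])) // 7 {"a":1}
}
```

The caller can then slice `data[off:]` itself, so no temporary slice has to be created inside the scanner.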
Benchmarks:

benchmark                        old ns/op    new ns/op    delta
BenchmarkCodeUnmarshal           146540233    98708913     -32.64%
BenchmarkCodeUnmarshalMethod     148930131    98619151     -33.78%
BenchmarkCodeUnmarshalReuse      143297363    90978323     -36.51%

benchmark                        old MB/s     new MB/s     speedup
BenchmarkCodeUnmarshal           13.24        19.66        1.48x
BenchmarkCodeUnmarshalMethod     13.03        19.68        1.51x
BenchmarkCodeUnmarshalReuse      13.54        21.33        1.58x

benchmark                        old allocs   new allocs   delta
BenchmarkCodeUnmarshal           195377       105750       -45.87%
BenchmarkCodeUnmarshalMethod     195376       105748       -45.87%
BenchmarkCodeUnmarshalReuse      180345       89892        -50.16%

benchmark                        old bytes    new bytes    delta
BenchmarkCodeUnmarshal           4274092      3457209      -19.11%
BenchmarkCodeUnmarshalMethod     4273988      3456867      -19.12%
BenchmarkCodeUnmarshalReuse      2937147      2047975      -30.27%
Patch Set 1: diff -r 3fd9ca3ab815 https://code.google.com/p/go
Patch Set 2: diff -r 3fd9ca3ab815 https://code.google.com/p/go

Total comments: 5
Total messages: 25
Hello golang-dev@googlegroups.com, I'd like you to review this change to https://code.google.com/p/go
Here are some benchmarks before and after applying this patch set:

Before:
BenchmarkCodeUnmarshal           10    156293404 ns/op    12.42 MB/s
BenchmarkCodeUnmarshalReuse      10    150102989 ns/op

After:
BenchmarkCodeUnmarshal           20     97127556 ns/op    19.98 MB/s
BenchmarkCodeUnmarshalMethod     20     97804487 ns/op    19.84 MB/s
BenchmarkCodeUnmarshalReuse      20     89162147 ns/op    21.76 MB/s

The difference is quite substantial.

Regards,
Alberto
Please use misc/benchcmp to compare benchmark output, and include it in the CL description.
Is it possible for you to post CPU profiles? I would like to know if the improvement is consistent across various stack splitting positions.

Rémy.

2013/8/1, David Symonds <dsymonds@golang.org>:
> Please use misc/benchcmp to compare benchmark output, and include it in
> the CL description.
On 2013/08/01 10:36:18, dsymonds wrote:
> Please use misc/benchcmp to compare benchmark output, and include it in
> the CL description.

Thanks, done!
On 2013/08/01 10:50:52, remyoudompheng wrote:
> Is it possible for you to post CPU profiles?
> I would like to know if the improvement is consistent across various
> stack splitting positions.

Sure, no problem. Here are the CPU profiles for BenchmarkCodeUnmarshal before and after:

fiam@ubuntu:~/go/src/pkg/encoding/json$ go test -v -run=none -bench=CodeUnmarshal$ -cpuprofile=cpu.old.prof
PASS
BenchmarkCodeUnmarshal    10    149568191 ns/op    12.97 MB/s
https://dl.dropboxusercontent.com/u/3193787/cpu.old.prof

fiam@ubuntu:~/go/src/pkg/encoding/json$ go test -v -run=none -bench=CodeUnmarshal$ -cpuprofile=cpu.new.prof
PASS
BenchmarkCodeUnmarshal    20     96215951 ns/op    20.17 MB/s
ok  	encoding/json	2.257s
https://dl.dropboxusercontent.com/u/3193787/cpu.new.prof

Regards,
Alberto
I'd like to see the incremental effect of each change included in this CL, rather than bundling them all together. Particularly the unsafe change, if we even want to contemplate that one.

https://codereview.appspot.com/12243043/diff/5001/src/pkg/encoding/json/decod...
File src/pkg/encoding/json/decode.go (right):

https://codereview.appspot.com/12243043/diff/5001/src/pkg/encoding/json/decod...
src/pkg/encoding/json/decode.go:21: "unsafe"
I think this is a no-no. We don't want to add new unsafe packages.
On 2013/08/01 13:45:19, rog wrote:
> I'd like to see the incremental effect of each change included in this
> CL, rather than bundling them all together. Particularly the unsafe
> change, if we even want to contemplate that one.

The difference is significant for that change, especially when it comes to allocations:

fiam@ubuntu:~/go/src/pkg/encoding/json$ ~/go/misc/benchcmp unsafe.txt safe.txt
benchmark                        old ns/op    new ns/op    delta
BenchmarkCodeUnmarshal           82400716     90397421     +9.70%
BenchmarkCodeUnmarshalMethod     85276712     92357337     +8.30%
BenchmarkCodeUnmarshalReuse      76710096     84389386     +10.01%

benchmark                        old MB/s     new MB/s     speedup
BenchmarkCodeUnmarshal           23.55        21.47        0.91x
BenchmarkCodeUnmarshalMethod     22.76        21.01        0.92x
BenchmarkCodeUnmarshalReuse      25.30        22.99        0.91x

benchmark                        old allocs   new allocs   delta
BenchmarkCodeUnmarshal           105749       195386       +84.76%
BenchmarkCodeUnmarshalMethod     105748       195384       +84.76%
BenchmarkCodeUnmarshalReuse      89892        179521       +99.71%

benchmark                        old bytes    new bytes    delta
BenchmarkCodeUnmarshal           3457194      4275326      +23.66%
BenchmarkCodeUnmarshalMethod     3456840      4274932      +23.67%
BenchmarkCodeUnmarshalReuse      2047975      2864786      +39.88%

> src/pkg/encoding/json/decode.go:21: "unsafe"
> I think this is a no-no. We don't want to add new unsafe packages.

If there's a big concern, I could add a "safe" version of the string to []byte conversion which could be conditionally compiled using build tags. I really can't find any valid reason not to take advantage of the unsafe package in this situation, because the string is thrown away as soon as the lookup in the map is completed.

Of course, if the underlying []byte is modified by another goroutine while the lookup is taking place, the result of the lookup is going to be wrong, but if you're modifying the []byte while parsing it as JSON (you definitely shouldn't!) the deserialization is also going to be wrong.

Regards,
Alberto
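[Editor's note: for context, the one-line conversion being debated looked roughly like the sketch below. The function name is invented; the aliasing caveat in the comment is exactly the one discussed above. Since Go 1.20, unsafe.String is the supported way to do this.]

```go
package main

import (
	"fmt"
	"unsafe"
)

// bytesToString reinterprets b as a string without copying.
// UNSAFE: the result aliases b's backing array, so b must not be
// modified while the string is in use. A slice header is a superset
// of a string header (ptr, len, cap vs. ptr, len), which is why this
// reinterpretation works.
func bytesToString(b []byte) string {
	return *(*string)(unsafe.Pointer(&b))
}

func main() {
	fields := map[string]int{"name": 0, "age": 1}
	key := []byte("age") // e.g. a key sliced out of the JSON input
	// Map lookup without allocating a string copy of key.
	fmt.Println(fields[bytesToString(key)]) // 1
}
```

The win is that the map lookup in the decoder's hot path costs zero allocations; the cost is exactly the safety argument made in the rest of this thread.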
Do not use the unsafe package just to gain performance in the standard libraries. -rob
Brad also has a pending CL to make json faster. I think we all agree it can be faster. I'd like some more time to think about what the best approach is. Thanks. Russ
On 2013/08/01 14:47:28, rsc wrote: > Brad also has a pending CL to make json faster. > I think we all agree it can be faster. > I'd like some more time to think about what the best approach is. > > Thanks. > Russ Brad's CL is about encoding, this one speeds up decoding. Regards, Alberto
Everything I wrote still applies. I didn't mention encoding vs decoding.
On 2013/08/01 14:45:42, r wrote:
> Do not use the unsafe package just to gain performance in the standard
> libraries.
>
> -rob

unsafe is used by math and encoding/binary just to gain performance. Why can't encoding/json use it to gain performance too?
Any ETA on that thinking? Or guidelines for what class of performance improvements (if any) are acceptable? In the meantime I've just stopped doing things related to performance or garbage. My JSON encoding CL just does what encoding/gob does, but without unsafe (caching per-Type encoders).

On Thu, Aug 1, 2013 at 7:47 AM, Russ Cox <rsc@golang.org> wrote:
> Brad also has a pending CL to make json faster.
> I think we all agree it can be faster.
> I'd like some more time to think about what the best approach is.
>
> Thanks.
> Russ
Because I don't want any more of this in the library. Because I'd rather see the compiler improve. Because packages that use unsafe can't be deployed in some environments, which means we need two versions of the code, which makes the code harder to maintain and test. Because it's ugly and unreadable.

Most important, because it's unsafe.

-rob

On Fri, Aug 2, 2013 at 1:27 AM, <alberto.garcia.hierro@gmail.com> wrote:
> On 2013/08/01 14:45:42, r wrote:
>> Do not use the unsafe package just to gain performance in the standard
>> libraries.
>
> unsafe is used by math and encoding/binary just to gain performance.
> Why can't encoding/json use it to gain performance too?
On 2013/08/01 21:43:30, r wrote:
> Because I don't want any more of this in the library. Because I'd rather
> see the compiler improve. Because packages that use unsafe can't be
> deployed in some environments, which means we need two versions of the
> code, which makes the code harder to maintain and test. Because it's
> ugly and unreadable.
>
> Most important, because it's unsafe.

The only environment that I know of which doesn't allow package unsafe to be used is App Engine, and that's for user code, not standard library code (otherwise, the math package would have a safe version of Float32bits and the other functions in math/unsafe.go). Furthermore, I'm sure the majority of Go users don't care about App Engine (but obviously Google does).

I'd also love the compiler to improve when it comes to []byte to string conversion, but we have to be pragmatic in the meantime. The usage of the unsafe package we're discussing involves *one line*, which speeds up the code significantly and reduces the number of allocations by 50%, which in turn benefits real world code immensely because the pressure on the GC is reduced. Heck, most people would call that an epic win.

In an ideal world, the compiler would produce a []byte to string conversion without copying and/or the runtime would have no problem dealing with the additional garbage generated, but that's not the case right now. By adding one ugly line we can ship a much better JSON decoder which is going to help a lot of Go users. When the compiler and/or runtime improve, that line can be changed to a simple cast and the code will be beautiful again, and everyone will be happy.

In the meantime, I take full responsibility for that ugly line of code. You can punch me in the face once for every problem it causes down the line. And, yes, you can get that in writing, signed by me.

Regards,
Alberto
On 2013/08/01 15:24:06, rsc wrote: > Everything I wrote still applies. I didn't mention encoding vs decoding. No worries, I got that. Just wanted to make clear that, while the two CLs are about speeding up JSON, they touch different parts of the package. Regards, Alberto
It was a mistake to export reflect.StringHeader. It's another mistake to build on that mistake. I stand by what I wrote before.

-rob
> In the meantime, I take full responsibility for that ugly line of code.
> You can punch me in the face once for every problem it causes down the
> line. And, yes, you can get that in writing, signed by me.

I'm sorry, but that's not how this works. We are the maintainers of the Go standard library code, and some of the things we maintain are the cleanliness, portability, correctness, and debuggability of the code. That's one of the significant strengths of the standard library. We often push back on changes that we believe will hurt those things. An appropriate response is to find a way to achieve your goal while addressing whatever concerns we have raised.

You cannot "take full responsibility" for the code. We're the ones who are going to be debugging it later, not you.

Russ
I don't understand your first bullet in the CL description, "avoid recalculating len(data)". I can't find what part of the code you are trying to describe with that, and also len(data) is a variable, not a calculation.

Please cut this CL down to just the name cache, and once that is settled we can worry about whether the other things matter.

Thanks.

https://codereview.appspot.com/12243043/diff/5001/src/pkg/encoding/json/decod...
File src/pkg/encoding/json/decode.go (right):

https://codereview.appspot.com/12243043/diff/5001/src/pkg/encoding/json/decod...
src/pkg/encoding/json/decode.go:65: // Don't check for well-formedness beforehand. It
This is even worse than using unsafe. You are changing the semantics of the function. I disagree with the new semantics, but even if I agreed, we can't make a change like this before Go 2. And you didn't even mention this part in the CL description.

https://codereview.appspot.com/12243043/diff/5001/src/pkg/encoding/json/decod...
src/pkg/encoding/json/decode.go:562: // WARNING: This []byte to string conversion is the fastest one
This is not okay. Please stop arguing with us about it. You changed the type of key above from string to []byte. Change it back to string, and then this is fine. Also, you can define

	type fieldKey struct {
		typ  reflect.Type
		name string
	}

and then use a single map[fieldKey]*field.

https://codereview.appspot.com/12243043/diff/5001/src/pkg/encoding/json/decod...
File src/pkg/encoding/json/decode_test.go (right):

https://codereview.appspot.com/12243043/diff/5001/src/pkg/encoding/json/decod...
src/pkg/encoding/json/decode_test.go:618: if isSpace(byte(c)) {
Please revert all these conversions. Perhaps byte is faster on some compilers, but it is no doubt slower on others, and there's no reason for this churn. Measure this change by itself and I think you will find that it is a wash. If not, there is a compiler bug to fix.

https://codereview.appspot.com/12243043/diff/5001/src/pkg/encoding/json/scann...
File src/pkg/encoding/json/scanner.go (right):

https://codereview.appspot.com/12243043/diff/5001/src/pkg/encoding/json/scann...
src/pkg/encoding/json/scanner.go:37: func nextValue(data []byte, scan *scanner) (idx int, err error) {
Another rewrite that may or may not be important by itself. If you want to measure it by itself and submit it as a separate CL, that's fine. One of the problems with this CL is that it does too much, and we can't tell what is a win and what is noise.

Please make this CL only about the name lookup cache, since that's what we've spent the most time discussing.
On 2013/08/06 17:05:10, rsc wrote:
> I don't understand your first bullet in the CL description "avoid
> recalculating len(data)". I can't find what part of the code you are
> trying to describe with that, and also len(data) is a variable, not a
> calculation.

It should have said len(d.data), my mistake. When I first profiled the code, decodeState.scanWhile() was consuming a very significant amount of time and was calling len(d.data) on every iteration. I changed the code to call len(d.data) once, on the decodeState initialization, and store it in a field in decodeState. This increased performance, but I can't remember exactly by how much.

> src/pkg/encoding/json/decode.go:65: // Don't check for well-formedness
> beforehand. It
> This is even worse than using unsafe. You are changing the semantics of
> the function.

It's not changing the semantics of the function. The returned error in the case of invalid syntax is going to be exactly the same as before. However, the JSON is not checked beforehand, but rather while parsing it. This speeds up parsing valid JSON by ~20%, slows down invalid JSON by ~15% and keeps the function semantics. There's some extra bookkeeping in the parsing code, because it can no longer assume that the JSON is valid, but since most of the time the JSON is going to be valid, it's a good tradeoff for real-world applications.

> src/pkg/encoding/json/decode.go:562: // WARNING: This []byte to string
> conversion is the fastest one
> This is not okay. Please stop arguing with us about it.

I've already changed that in my local copy. I'm now preloading the cache for the reflect.Type (e.g. typeCache := fieldCache[typ]) before entering the loop which obtains the keys, which then just does field := typeCache[key]. This way there's one lookup for the reflect.Type and then only one lookup for each key (rather than 2 lookups per key, like this CL does). I'm also keeping the typeCache in a field when parsing slices, to avoid as many lookups as possible.

> src/pkg/encoding/json/decode_test.go:618: if isSpace(byte(c)) {
> Please revert all these conversions.

I didn't add any conversions, I removed them. The code was casting byte to int and then to rune all over the place, calling isSpace(rune(c)). The only cast left is in that test. I measured it and it made a 1-2% difference. It's not a big one, but there's no good reason to cast a byte to an int and then to a rune when both isSpace() and the step functions can work with bytes directly.

> Please make this CL only about the name lookup cache, since that's what
> we've spent the most time discussing.

I'm sorry, but I don't have the time nor the desire to do that. I just want my apps to parse JSON more efficiently. With your current development model, I have to send a CL for a small change, wait until it's accepted (usually weeks) and then make another small change. Rinse, repeat. This wastes a lot of my time, and between one CL and the next I might even have forgotten what I was doing. That's why this CL is a big one, and my intention was to make it even bigger, by adding a lot more optimizations I have in my local copy. In fact, by now I have rewritten most of the code to make the scanner event based rather than byte based. The current scanner is fed one byte at a time. On the other hand, the implementation I have right now receives all the data and reads and decodes as much JSON as possible on every call. This makes it several times faster (I haven't finished optimizing it yet, but right now it's 3-4x faster than encoding/json).

I'll drop this CL now, since the kind of changes that I plan to make don't seem to be welcome and, at this point, it's easier for me to just keep the code in our internal Go fork.

Regards,
Alberto
*** Abandoned ***
On Tue, Aug 6, 2013 at 2:58 PM, <alberto.garcia.hierro@gmail.com> wrote:
> I'll drop this CL now, since the kind of changes that I plan to make
> don't seem to be welcome and, at this point, it's easier for me to just
> keep the code in our internal Go fork.

That certainly works. There are three main reasons to get something into the standard library:

1) get a lot of users (it's there by default)
2) get a thorough review
3) ease the maintenance burden on you (no need for an internal version)

I don't know what your internal review process is like, nor how many users you have internally, but if you're going to maintain a fork anyway, you might want to consider putting your forked copy of encoding/json on github. There you'd get more eyeballs & bug reports & users.

The risk is that it could split the community and people would use your "fastjson" package instead of the standard library's encoding/json. I think that's acceptable, if we're really that much slower. In any case, the license permits that.
Message was sent while issue was closed.
On 2013/08/07 01:45:53, bradfitz wrote:
> I don't know what your internal review process is like, nor how many
> users you have internally, but if you're going to maintain a fork
> anyway, you might want to consider putting your forked copy of
> encoding/json on github. There you'd get more eyeballs & bug reports &
> users.

Our fork is just used at Rainy Cape (my company) to build our websites. Our review process is basically: if it works, has good test coverage and the code does nothing stupid, it's fine for us (we are perfectly fine with our code using package unsafe for speed, since we don't use App Engine). Right now, there are 5 developers besides me using it, so the user base is not big at all.

I try to submit all our changes to the standard packages and the compilers as CLs, but due to your development process it takes a lot of time. e.g. we have had support for calling C function pointers for almost 2 months now. I submitted a CL for adding support for C function pointer variables (since the code for calling them wasn't tested enough at the time) 2 months and a week ago and it's still sitting there. If it gets accepted, then I have to dig up our commit (or commits) which added support for calling C function pointers, its tests, etc… apply it to a clean Go checkout, check any open Go issues that it might fix and submit it as a CL. Again, this takes a lot of time and sometimes the reviews are not very friendly (e.g. some reviews feel like "this is our language and we get to decide, so shut up now. Oh, and you're stupid for suggesting that").

Lately, I've opted for adding most of the code we develop to our internal web framework (which we hope to release some day), rather than submitting it for inclusion in Go. e.g. we have full i18n support, which extracts the strings from Go code, generates standard .po files and then compiles those .po files to .go files rather than .mo. This way, packages with translations are still "go get-able".

As you may know, po translation files include a formula for choosing a plural form for a given number. I initially wrote a simple register based VM to interpret those formulas, but it felt kind of slow (it took ~2x the time of gc-compiled code), so I wrote a JIT for it which even outperforms the gc compiler (for these formulas, which are quite simple). If I were to submit this for inclusion in Go, I'm pretty sure the JIT wouldn't get accepted and I would probably have to argue for weeks or months about every choice I made during the design of the i18n support. It's really not worth it.

What I'm trying to say is that I feel that the process of contributing code to Go as an outsider to Google is way more difficult than it should be and, in the long term, this hurts the Go ecosystem.

Regards,
Alberto
On Wed, Aug 7, 2013 at 5:12 AM, <alberto.garcia.hierro@gmail.com> wrote:
> I try to submit all our changes to the standard packages and the
> compilers as CLs, but due to your development process it takes a lot of
> time. [...]
>
> What I'm trying to say is that I feel that the process of contributing
> code to Go as an outsider to Google is way more difficult than it
> should be and, in the long term, this hurts the Go ecosystem.

I'm sorry you find the process frustrating. You're not alone in that. We do tend to move slowly and incrementally in the standard repository, because we know that decisions made there last a long time, and require maintenance. Significant changes to the standard repository are discussed on golang-dev before they become a CL, and many of them have a design document.

Your function pointer CL did get a reply 5 days ago. I agree it sat there too long. That long-delayed comment was reasonable: significant new functionality like your CL does require a test. I also just added a few more comments to the CL and I hope you will push it forward.

Your i18n code sounds useful for many people. It wouldn't go into the standard repository in any case--the i18n support is currently in the separate go.text repository (http://code.google.com/p/go.text). If you would like to send a design doc to golang-dev, I think it would be well received, but, you're right, people would discuss it for a while. I hope you will at least consider releasing it yourself on github or wherever.

I want to be clear that this is not an inside/outside Google thing; the same issues arise for people inside Google. There are ways to get changes in: discuss the design first, send small incremental CLs, build a track record of success. But, yes, it's slow and often frustrating. It's not a free-for-all for anybody.

I don't have any real comments on this encoding/json CL except to observe that although App Engine is understandably not important for you, it is important for many people. We simply can't change a fundamental package like encoding/json such that it can not be used on App Engine. That's a non-starter.

I hope you can continue to use Go happily.

Ian