(2011-02-17 17:58:08 UTC)
#2
Looks pretty good. If we're doing this to make []byte
handling more efficient, let's clean up a few other
inefficiencies here too.
In decode, please add an unquoteBytes that returns []byte and
make unquote a wrapper around unquoteBytes.
In unquoteBytes, it would be nice to keep track of
whether any changes need to be made and delay
the allocation of the new []byte b until that is a certainty.
In the common case unquoteBytes can return s itself.
Then pick off []byte earlier in literal so that a base64
decode has 0 incidental mallocs as compared to
the 3 it has now: []byte -unquote-> []byte -> string -> []byte.
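The delayed-allocation idea might be sketched roughly as follows. This is an illustrative outline, not the CL's code: the slow-path unescaping loop is elided, and only the fast-path scan that makes allocation conditional is shown.

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// unquoteBytes scans the quoted input first and allocates only once a
// rewrite is certain; in the common case it returns a subslice of s
// itself with zero copies.
func unquoteBytes(s []byte) (t []byte, ok bool) {
	if len(s) < 2 || s[0] != '"' || s[len(s)-1] != '"' {
		return
	}
	s = s[1 : len(s)-1]
	r := 0
	for r < len(s) {
		c := s[r]
		if c == '\\' || c == '"' || c < ' ' {
			break // escape or control character: rewrite needed
		}
		if c < utf8.RuneSelf {
			r++
			continue
		}
		rr, size := utf8.DecodeRune(s[r:])
		if rr == utf8.RuneError && size == 1 {
			break // invalid UTF-8: rewrite needed
		}
		r += size
	}
	if r == len(s) {
		return s, true // no changes needed: return s itself, no allocation
	}
	// Only at this point is an allocation certain; the full
	// unescaping loop is elided in this sketch.
	return nil, false
}

// unquote becomes a thin wrapper around unquoteBytes.
func unquote(s []byte) (string, bool) {
	b, ok := unquoteBytes(s)
	return string(b), ok
}

func main() {
	b, ok := unquoteBytes([]byte(`"hello"`))
	fmt.Println(string(b), ok) // hello true
}
```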
In encode, please use a base64.Encoder to avoid the
copy into the temporary (and arbitrarily large) []byte buffer.
Russ
(2011-02-18 11:24:22 UTC)
#3
On 2011/02/17 17:58:08, rsc wrote:
> Looks pretty good. If we're doing this to make []byte
> handling more efficient, let's clean up a few other
> inefficiencies here too.
>
> In decode, please add an unquoteBytes that returns []byte and
> make unquote a wrapper around unquoteBytes.
> In unquoteBytes, it would be nice to keep track of
> whether any changes need to be made and delay
> the allocation of the new []byte b until that is a certainty.
> In the common case unquoteBytes can return s itself.
> Then pick off []byte earlier in literal so that a base64
> decode has 0 incidental mallocs as compared to
> the 3 it has now: []byte -unquote-> []byte -> string -> []byte.
done
>
> In encode, please use a base64.Encoder to avoid the
> copy into the temporary (and arbitrarily large) []byte buffer.
in my tests using an Encoder directly was about 80% faster for
small buffers, and about 5% slower for large (50K+) buffers,
so i changed the code to choose the method depending on
the size of the buffer, but YMMV.
(2011-02-23 15:51:38 UTC)
#6
> ok. i'd wanted to keep the loop as a range, which is quite a bit faster
> (20% on my machine), but it's worth avoiding the copy penalty for
> well-formed non-ascii.
the loop will get faster.
> i had tried this, and thought it looked better with the else, since the
> two pieces of code are roughly equal weight.
> but done anyway.
thanks. my rationale was to exit early on the
special case, leaving the general case as the
straight-line code.
Issue 4160058: code review 4160058: json: use base64 to encode []byte
(Closed)
Created 14 years, 2 months ago by rog
Modified 14 years, 2 months ago
Comments: 6