I strongly object to adding more random scripts to the source tree. There are
already far too many unmaintained in scripts/auxiliar/ with no documentation at
all.
On 2020/04/25 17:07:17, hahnjo wrote:
> I strongly object to adding more random scripts to the source tree. There are
> already far too many unmaintained in scripts/auxiliar/ with no documentation at
> all.
How about approaching this in a different manner then? Adding instructions to
the CG about how to benchmark LilyPond's behavior in a sensible manner? And if
the instructions end up bothersome to follow, back them up with scripts doing
the bulk of the work?
While I agree that adding more "use-me-if-you-manage-to-find-me" material is not
overly helpful, the basic idea for providing tools for a common task is
certainly not wrong. And there are contributors who are more comfortable
starting their work with tasks where the main channel of feedback is at first
provided by computers, meaning that they don't get the feeling they are taxing
anybody's patience with getting feedback on their first steps.
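For concreteness, a minimal sketch of what such a helper script could look
like, in Python (illustrative only; the `lilypond` binary on PATH, the input
file `input.ly`, and the repetition count are placeholders, not anything
taken from this patch):

#!/usr/bin/env python3
"""Minimal timing harness for a LilyPond run (illustrative sketch).

Placeholders, not part of this patch: a `lilypond` binary on PATH and
a test file `input.ly` in the current directory.
"""
import statistics
import subprocess
import time

RUNS = 5  # repeat to smooth out noise from the OS and caches

def time_one(binary, infile):
    """Return wall-clock seconds for one LilyPond invocation."""
    start = time.perf_counter()
    subprocess.run([binary, "--output=/tmp/bench", infile],
                   check=True,
                   stdout=subprocess.DEVNULL,
                   stderr=subprocess.DEVNULL)
    return time.perf_counter() - start

def main():
    times = [time_one("lilypond", "input.ly") for _ in range(RUNS)]
    # The minimum is usually the most stable statistic: it is the
    # run least disturbed by other processes.
    print("min    %.3fs" % min(times))
    print("median %.3fs" % statistics.median(times))

if __name__ == "__main__":
    main()

Documenting a recipe like this in the CG, with the script only doing the
bulk of the work, keeps the instructions primary and the code secondary.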
On 2020/04/25 22:05:26, dak wrote:
> On 2020/04/25 17:07:17, hahnjo wrote:
> > I strongly object to adding more random scripts to the source tree. There are
> > already far too many unmaintained in scripts/auxiliar/ with no documentation at
> > all.
>
> How about approaching this in a different manner then? Adding instructions to
> the CG about how to benchmark LilyPond's behavior in a sensible manner? And if
> the instructions end up bothersome to follow, back them up with scripts doing
> the bulk of the work?
>
> While I agree that adding more "use-me-if-you-manage-to-find-me" material is not
> overly helpful, the basic idea for providing tools for a common task is
> certainly not wrong. And there are contributors who are more comfortable
> starting their work with tasks where the main channel of feedback is at first
> provided by computers, meaning that they don't get the feeling they are taxing
> anybody's patience with getting feedback on their first steps.
I can host this script somewhere else so it can be referenced in the CG,
but I don't think optimizing our C++ code is a domain for beginners.
On 2020/04/25 22:05:26, dak wrote:
> On 2020/04/25 17:07:17, hahnjo wrote:
> > I strongly object to adding more random scripts to the source tree. There are
> > already far too many unmaintained in scripts/auxiliar/ with no documentation at
> > all.
>
> How about approaching this in a different manner then? Adding instructions to
> the CG about how to benchmark LilyPond's behavior in a sensible manner? And if
> the instructions end up bothersome to follow, back them up with scripts doing
> the bulk of the work?
I'd still argue that they will just rot over time. Have a look at
https://sourceforge.net/p/testlilyissues/issues/5665/ and the reasoning I
included there for why all of these features and scripts no longer worked,
despite some being documented.
On 4/26/20, hanwenn@gmail.com <hanwenn@gmail.com> wrote:
> I can host this script somewhere else so it can be referenced in the CG,
> but I don't think optimizing our C++ code is a domain for
> beginners.
I may be off the mark here, but what about adding your speed test into
the standard regression test suite? I know the regtests are already
measured for compilation time, but IIRC there’s no middle-size score
compilation such as the horn thing, separate from the rest,
specifically to get an inkling as to which changes in the codebase may
result in speed gains or unwanted slowdowns (not noticeable
otherwise). If this means adding 20s or so to `make check`, couldn’t
that be a bargain worth discussing?
Cheers,
-- V.
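For the sake of argument, a sketch of what such a timed regtest could look
like (illustrative only; BASELINE, TOLERANCE, and the score file are invented
placeholders, not values from the actual test suite):

#!/usr/bin/env python3
"""Sketch of a pass/fail timing regtest (illustrative only).

BASELINE and TOLERANCE are invented numbers, and mid-size-score.ly is
a placeholder for something like the horn example.
"""
import subprocess
import sys
import time

BASELINE = 20.0   # seconds on a reference machine (invented number)
TOLERANCE = 1.25  # fail if more than 25% slower than the baseline

start = time.perf_counter()
subprocess.run(["lilypond", "--output=/tmp/regtime", "mid-size-score.ly"],
               check=True,
               stdout=subprocess.DEVNULL,
               stderr=subprocess.DEVNULL)
elapsed = time.perf_counter() - start

print("elapsed %.1fs (baseline %.1fs)" % (elapsed, BASELINE))
sys.exit(0 if elapsed <= BASELINE * TOLERANCE else 1)

A fixed baseline is of course only meaningful on one reference machine,
which is the crux of the reply below.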
On Sun, Apr 26, 2020 at 10:04 PM Valentin Villenave
<valentin@villenave.net> wrote:
>
> On 4/26/20, hanwenn@gmail.com <hanwenn@gmail.com> wrote:
> > I can host this script somewhere else so it can be referenced in the CG,
> > but I don't think optimizing our C++ code is a domain for
> > beginners.
>
> I may be off the mark here, but what about adding your speed test into
> the standard regression test suite? I know the regtests are already
> measured for compilation time, but IIRC there’s no middle-size score
> compilation such as the horn thing, separate from the rest,
> specifically to get an inkling as to which changes in the codebase may
> result in speed gains or unwanted slowdowns (not noticeable
> otherwise). If this means adding 20s or so to `make check`, couldn’t
> that be a bargain worth discussing?
It doesn't work.
1) You need to have a machine doing nothing else. Even web browsing in
the background will mess with shared L3 caches.
2) Benchmarking is always comparative, so you need a binary of the
previous version available as well. The current setup doesn't produce
that.
3) We used to have a CPU time field in our regtest files (look for
.profile), but the numbers were too noisy and it was switched off.
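To make point 2 concrete: a comparison needs both binaries in hand, and
interleaving their runs helps. A sketch (illustrative only; the OLD and NEW
paths and the input file are placeholders):

#!/usr/bin/env python3
"""Sketch of an interleaved A/B timing comparison (illustrative only).

OLD and NEW are placeholder paths; both binaries have to be built
beforehand, which, as noted above, the current setup does not do.
"""
import subprocess
import time

OLD = "./old/lilypond"  # placeholder: baseline binary
NEW = "./new/lilypond"  # placeholder: patched binary
RUNS = 7

def time_one(binary):
    """Return wall-clock seconds for one run on input.ly (placeholder)."""
    start = time.perf_counter()
    subprocess.run([binary, "--output=/tmp/ab", "input.ly"],
                   check=True,
                   stdout=subprocess.DEVNULL,
                   stderr=subprocess.DEVNULL)
    return time.perf_counter() - start

# Interleave the two binaries so that drift on the machine (thermal
# throttling, background load) hits both sides roughly equally.
old_times, new_times = [], []
for _ in range(RUNS):
    old_times.append(time_one(OLD))
    new_times.append(time_one(NEW))

print("old min %.3fs, new min %.3fs, ratio %.3f"
      % (min(old_times), min(new_times),
         min(new_times) / min(old_times)))

Interleaving only mitigates slow drift; per point 1, a machine doing
nothing else is still required for the numbers to mean anything.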
--
Han-Wen Nienhuys - hanwenn@gmail.com - http://www.xs4all.nl/~hanwen
Issue 545950043: Add a script for running timing benchmarks (Closed)
Created 4 years ago by hanwenn
Modified 4 years ago
Reviewers: hahnjo, dak, valentin_villenave.net