Created: 5 years ago by hanwenn
Modified: 4 years, 11 months ago
CC: lilypond-devel@gnu.org
Visibility: Public
Description

Address output-distance problems:
* Run output-distance.py from srcdir
* Fix <meta charset=".."> tag
* Generate self-test HTML in out/
* Remove test files afterwards
Patch Set 1
Patch Set 2: local-test
Total comments: 1
Patch Set 3: meta
Patch Set 4: disable test
Patch Set 5: disable test (try 2)

Messages
Total messages: 35
local-test
Looks mostly good to me, but I don't understand the change for <meta>. I'd propose to push only the changes required to restore 'make check' because it's blocking James from testing patches.

https://codereview.appspot.com/563730043/diff/577660046/scripts/build/output-...
File scripts/build/output-distance.py (right):

https://codereview.appspot.com/563730043/diff/577660046/scripts/build/output-...
scripts/build/output-distance.py:834: <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
I don't understand this change, see https://stackoverflow.com/a/4696517/10606944
On 2020/03/11 12:15:54, hahnjo wrote:
> Looks mostly good to me, but I don't understand the change for <meta>. I'd
> propose to push only the changes required to restore 'make check' because it's
> blocking James from testing patches.
>
> https://codereview.appspot.com/563730043/diff/577660046/scripts/build/output-...
> File scripts/build/output-distance.py (right):
>
> https://codereview.appspot.com/563730043/diff/577660046/scripts/build/output-...
> scripts/build/output-distance.py:834: <meta http-equiv="Content-Type"
> content="text/html; charset=utf-8">
> I don't understand this change, see https://stackoverflow.com/a/4696517/10606944

This is what Tidy on Ubuntu Xenial complained about. However, I couldn't get it to shut up completely, and removing the self-test output also works.
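For background on the <meta> question: HTML5 accepts both the short charset attribute and the older http-equiv declaration, and browsers treat the two as equivalent encoding declarations (the linked Stack Overflow answer makes the same point). A minimal illustration, using hypothetical variable names rather than anything taken from output-distance.py:

```python
# Sketch only: two equivalent ways to declare UTF-8 in a generated HTML
# report. Variable names are illustrative, not from output-distance.py.

# HTML5 short form:
META_SHORT = '<meta charset="utf-8">'

# Older http-equiv form (the one added at line 834 in the patch):
META_LONG = ('<meta http-equiv="Content-Type" '
             'content="text/html; charset=utf-8">')

# Either line, placed inside <head>, tells the browser the page is UTF-8.
```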
LGTM (It might be a good idea to suppress the output of the test run - I was seriously confused that output-distance was outputting differences before even running the regression tests. But that's for a future change.)
going to fast-track this so the testing can continue.
hanwenn@gmail.com writes:

> going to fast-track this so the testing can continue.
>
> https://codereview.appspot.com/563730043/

Patchy refuses. Staging is blocked. Since there is no point in admitting a patch that will stop master from building, I am removing from staging. I will retry to make very sure that nothing I did at the time on my computer is at fault, though.

23:27:18 (UTC) Begin LilyPond compile, previous commit at 93c179860f0edf55722e7157ef2024c30b33a47d
23:27:22 Merged staging, now at: 93c179860f0edf55722e7157ef2024c30b33a47d
23:27:23 Success: ./autogen.sh --noconfigure
23:27:35 Success: /tmp/lilypond-autobuild/configure --enable-checking
23:27:38 Success: nice make clean
23:31:42 Success: nice make -j9 CPU_COUNT=9
23:31:45 *** FAILED BUILD ***
nice make test -j9 CPU_COUNT=9
Previous good commit: ee197383f4af552ed433c496617cb5ffe2a28dcf
Current broken commit: 93c179860f0edf55722e7157ef2024c30b33a47d
23:31:45 *** FAILED STEP ***
merge from staging
Failed runner: nice make test -j9 CPU_COUNT=9
See the log file log-staging-nice-make-test--j9-CPU_COUNT=9.txt
23:31:45 Traceback (most recent call last):
  File "/usr/local/tmp/lilypond-extra/patches/compile_lilypond_test/__init__.py", line 528, in handle_staging
    self.build (issue_id=issue_id)
  File "/usr/local/tmp/lilypond-extra/patches/compile_lilypond_test/__init__.py", line 328, in build
    issue_id)
  File "/usr/local/tmp/lilypond-extra/patches/compile_lilypond_test/__init__.py", line 266, in runner
    raise FailedCommand ("Failed runner: %s\nSee the log file %s" % (command, this_logfilename))
FailedCommand: Failed runner: nice make test -j9 CPU_COUNT=9
See the log file log-staging-nice-make-test--j9-CPU_COUNT=9.txt

.
----------------------------------------------------------------------
Ran 1 test in 0.003s

OK
GNU LilyPond 2.21.0
cp: cannot stat '19.sub{-*.signature,.ly,-1.eps,.log,.profile}': No such file or directory
test results in ./out/test-output-distance
Traceback (most recent call last):
  File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", line 1561, in <module>
    main ()
  File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", line 1546, in main
    run_tests ()
  File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", line 1495, in run_tests
    test_compare_tree_pairs ()
  File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", line 1330, in test_compare_tree_pairs
    system ('cp 19.sub{-*.signature,.ly,-1.eps,.log,.profile} dir1/subdir/')
  File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", line 1304, in system
    assert stat == 0, (stat, x)
AssertionError: (256, 'cp 19.sub{-*.signature,.ly,-1.eps,.log,.profile} dir1/subdir/')
make[1]: *** [/tmp/lilypond-autobuild/./scripts/build/GNUmakefile:19: local-test] Error 1
make: *** [/tmp/lilypond-autobuild/GNUmakefile.in:328: test] Error 2

--
David Kastrup
Easiest fix is probably to disable the test for output-distance, by backing out the change to GNUmakefile.in.

I'm curious about the exact procedure for building here. I've tested this and the previous patch in various configurations.

On Thu, Mar 12, 2020 at 12:49 AM David Kastrup <dak@gnu.org> wrote:
>
> hanwenn@gmail.com writes:
>
> > going to fast-track this so the testing can continue.
> >
> > https://codereview.appspot.com/563730043/
>
> Patchy refuses. Staging is blocked. Since there is no point in
> admitting a patch that will stop master from building, I am removing
> from staging. I will retry to make very sure that nothing I did at the
> time on my computer is at fault, though.
>
> 23:27:18 (UTC) Begin LilyPond compile, previous commit at 93c179860f0edf55722e7157ef2024c30b33a47d
> 23:27:22 Merged staging, now at: 93c179860f0edf55722e7157ef2024c30b33a47d
> 23:27:23 Success: ./autogen.sh --noconfigure
> 23:27:35 Success: /tmp/lilypond-autobuild/configure --enable-checking
> 23:27:38 Success: nice make clean
> 23:31:42 Success: nice make -j9 CPU_COUNT=9
> 23:31:45 *** FAILED BUILD ***
> nice make test -j9 CPU_COUNT=9
> Previous good commit: ee197383f4af552ed433c496617cb5ffe2a28dcf
> Current broken commit: 93c179860f0edf55722e7157ef2024c30b33a47d
> 23:31:45 *** FAILED STEP ***
> merge from staging
> Failed runner: nice make test -j9 CPU_COUNT=9
> See the log file log-staging-nice-make-test--j9-CPU_COUNT=9.txt
> 23:31:45 Traceback (most recent call last):
>   File "/usr/local/tmp/lilypond-extra/patches/compile_lilypond_test/__init__.py", line 528, in handle_staging
>     self.build (issue_id=issue_id)
>   File "/usr/local/tmp/lilypond-extra/patches/compile_lilypond_test/__init__.py", line 328, in build
>     issue_id)
>   File "/usr/local/tmp/lilypond-extra/patches/compile_lilypond_test/__init__.py", line 266, in runner
>     raise FailedCommand ("Failed runner: %s\nSee the log file %s" % (command, this_logfilename))
> FailedCommand: Failed runner: nice make test -j9 CPU_COUNT=9
> See the log file log-staging-nice-make-test--j9-CPU_COUNT=9.txt
>
> .
> ----------------------------------------------------------------------
> Ran 1 test in 0.003s
>
> OK
> GNU LilyPond 2.21.0
> cp: cannot stat '19.sub{-*.signature,.ly,-1.eps,.log,.profile}': No such file or directory
> test results in ./out/test-output-distance
> Traceback (most recent call last):
>   File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", line 1561, in <module>
>     main ()
>   File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", line 1546, in main
>     run_tests ()
>   File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", line 1495, in run_tests
>     test_compare_tree_pairs ()
>   File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", line 1330, in test_compare_tree_pairs
>     system ('cp 19.sub{-*.signature,.ly,-1.eps,.log,.profile} dir1/subdir/')
>   File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", line 1304, in system
>     assert stat == 0, (stat, x)
> AssertionError: (256, 'cp 19.sub{-*.signature,.ly,-1.eps,.log,.profile} dir1/subdir/')
> make[1]: *** [/tmp/lilypond-autobuild/./scripts/build/GNUmakefile:19: local-test] Error 1
> make: *** [/tmp/lilypond-autobuild/GNUmakefile.in:328: test] Error 2
>
> --
> David Kastrup

--
Han-Wen Nienhuys - hanwenn@gmail.com - http://www.xs4all.nl/~hanwen
On 2020/03/11 23:49:23, dak wrote:
> [...]
> GNU LilyPond 2.21.0
> cp: cannot stat '19.sub{-*.signature,.ly,-1.eps,.log,.profile}': No such file or directory
> test results in ./out/test-output-distance
> Traceback (most recent call last):
>   File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", line 1561, in <module>
>     main ()
>   File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", line 1546, in main
>     run_tests ()
>   File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", line 1495, in run_tests
>     test_compare_tree_pairs ()
>   File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", line 1330, in test_compare_tree_pairs
>     system ('cp 19.sub{-*.signature,.ly,-1.eps,.log,.profile} dir1/subdir/')
>   File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", line 1304, in system
>     assert stat == 0, (stat, x)
> AssertionError: (256, 'cp 19.sub{-*.signature,.ly,-1.eps,.log,.profile} dir1/subdir/')
> make[1]: *** [/tmp/lilypond-autobuild/./scripts/build/GNUmakefile:19: local-test] Error 1
> make: *** [/tmp/lilypond-autobuild/GNUmakefile.in:328: test] Error 2

This looks like a bash-ism, which might explain why it works for Han-Wen and me. I agree with him that disabling the local-test invocation in GNUmakefile.in is probably the easiest solution for now. These tests haven't run for years, so we'll definitely be fine without them for a few more days.
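The bash-ism is the brace expansion in the cp command: on Ubuntu, /bin/sh is dash, which passes the braces through literally, so the shell never sees the expanded file names and cp fails. One shell-independent way to express such a copy step is sketched below; the helper is illustrative only and is not what output-distance.py or the eventual fix actually does:

```python
# Sketch only: copy files matching several suffix patterns without invoking a
# shell at all, so the result does not depend on whether /bin/sh is bash or
# dash. The helper name is illustrative, not code from output-distance.py.
import glob
import os
import shutil


def copy_matching(stem, suffixes, dest):
    """Copy every file matching stem + suffix; a suffix may contain wildcards."""
    os.makedirs(dest, exist_ok=True)
    for suffix in suffixes:
        for path in glob.glob(stem + suffix):
            shutil.copy(path, dest)


# Equivalent of: cp 19.sub{-*.signature,.ly,-1.eps,.log,.profile} dir1/subdir/
copy_matching('19.sub',
              ['-*.signature', '.ly', '-1.eps', '.log', '.profile'],
              'dir1/subdir/')
```

Unlike cp, glob silently skips patterns that match nothing, which is usually acceptable for a test fixture but worth keeping in mind.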
disable test
disable test (try 2)
On 2020/03/12 08:01:03, hahnjo wrote: > On 2020/03/11 23:49:23, dak wrote: > > [...] > > GNU LilyPond 2.21.0 > > cp: cannot stat '19.sub{-*.signature,.ly,-1.eps,.log,.profile}': No such file > or > > directory > > test results in ./out/test-output-distance > > Traceback (most recent call last): > > File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", line 1561, > in > > <module> > > main () > > File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", line 1546, > in > > main > > run_tests () > > File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", line 1495, > in > > run_tests > > test_compare_tree_pairs () > > File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", line 1330, > in > > test_compare_tree_pairs > > system ('cp 19.sub{-*.signature,.ly,-1.eps,.log,.profile} dir1/subdir/') > > File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", line 1304, > in > > system > > assert stat == 0, (stat, x) > > AssertionError: (256, 'cp 19.sub{-*.signature,.ly,-1.eps,.log,.profile} > > dir1/subdir/') > > make[1]: *** [/tmp/lilypond-autobuild/./scripts/build/GNUmakefile:19: > > local-test] Error 1 > > make: *** [/tmp/lilypond-autobuild/GNUmakefile.in:328: test] Error 2 > > This looks like bash-ism which might explain why it works for Han-Wen and me. I > agree with him that disabling the local-test invocation in GNUmakefile.in is > probably the easiest solution for now. These tests haven't run for years, so > we'll definitely be fine without them for a few more days. dak@lola:/usr/local/tmp/lilypond$ dash $ echo {1,2,3} {1,2,3} $ Ah yes. Since /bin/sh defaults to dash on Ubuntu (or doesn't it any more?), I wonder how this escaped testing.
On 2020/03/12 09:22:09, dak wrote: > On 2020/03/12 08:01:03, hahnjo wrote: > > This looks like bash-ism which might explain why it works for Han-Wen and me. > I > > agree with him that disabling the local-test invocation in GNUmakefile.in is > > probably the easiest solution for now. These tests haven't run for years, so > > we'll definitely be fine without them for a few more days. > > dak@lola:/usr/local/tmp/lilypond$ dash > $ echo {1,2,3} > {1,2,3} > $ > > Ah yes. Since /bin/sh defaults to dash on Ubuntu (or doesn't it any more?), I > wonder how this escaped testing. It wasn't tested, that's the point: The initial patch only received a 'test-baseline' and no 'check' which didn't trigger the python tests. 'patchy-staging' only runs 'test' as far as I understand, so that patch landed in master. Now this patch adds it to 'test' which means it's the first time somebody runs it on Ubuntu. I'm for disabling it again until it receives sufficient testing. Either in current master removing 'local-check' from 'scripts/build/GNUmakefile' or taking the updated patchset from here without the 'local-test' recursion in GNUmakefile.in
Hello What exactly am I supposed to be testing? With or without make check? I am struggling with all this 'back and forth' and with patches getting created and tested by different people (worksforme, doesn't work for me etc.). Could someone put something in the tracker to know what I am to expect? Thanks. James On 12/03/2020 09:22, dak@gnu.org wrote: > On 2020/03/12 08:01:03, hahnjo wrote: >> On 2020/03/11 23:49:23, dak wrote: >>> [...] >>> GNU LilyPond 2.21.0 >>> cp: cannot stat '19.sub{-*.signature,.ly,-1.eps,.log,.profile}': No > such file >> or >>> directory >>> test results in ./out/test-output-distance >>> Traceback (most recent call last): >>> File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", > line 1561, >> in >>> <module> >>> main () >>> File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", > line 1546, >> in >>> main >>> run_tests () >>> File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", > line 1495, >> in >>> run_tests >>> test_compare_tree_pairs () >>> File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", > line 1330, >> in >>> test_compare_tree_pairs >>> system ('cp 19.sub{-*.signature,.ly,-1.eps,.log,.profile} > dir1/subdir/') >>> File "/tmp/lilypond-autobuild/scripts/build/output-distance.py", > line 1304, >> in >>> system >>> assert stat == 0, (stat, x) >>> AssertionError: (256, 'cp > 19.sub{-*.signature,.ly,-1.eps,.log,.profile} >>> dir1/subdir/') >>> make[1]: *** > [/tmp/lilypond-autobuild/./scripts/build/GNUmakefile:19: >>> local-test] Error 1 >>> make: *** [/tmp/lilypond-autobuild/GNUmakefile.in:328: test] Error 2 >> This looks like bash-ism which might explain why it works for Han-Wen > and me. I >> agree with him that disabling the local-test invocation in > GNUmakefile.in is >> probably the easiest solution for now. These tests haven't run for > years, so >> we'll definitely be fine without them for a few more days. > dak@lola:/usr/local/tmp/lilypond$ dash > $ echo {1,2,3} > {1,2,3} > $ > > Ah yes. Since /bin/sh defaults to dash on Ubuntu (or doesn't it any > more?), I wonder how this escaped testing. > > > > https://codereview.appspot.com/563730043/ >
On 2020/03/12 09:33:43, hahnjo wrote: > On 2020/03/12 09:22:09, dak wrote: > > On 2020/03/12 08:01:03, hahnjo wrote: > > > This looks like bash-ism which might explain why it works for Han-Wen and > me. > > I > > > agree with him that disabling the local-test invocation in GNUmakefile.in is > > > probably the easiest solution for now. These tests haven't run for years, so > > > we'll definitely be fine without them for a few more days. > > > > dak@lola:/usr/local/tmp/lilypond$ dash > > $ echo {1,2,3} > > {1,2,3} > > $ > > > > Ah yes. Since /bin/sh defaults to dash on Ubuntu (or doesn't it any more?), I > > wonder how this escaped testing. > > It wasn't tested, that's the point: The initial patch only received a > 'test-baseline' and no 'check' which didn't trigger the python tests. > 'patchy-staging' only runs 'test' as far as I understand, so that patch landed > in master. Now this patch adds it to 'test' which means it's the first time > somebody runs it on Ubuntu. > I'm for disabling it again until it receives sufficient testing. Either in > current master removing 'local-check' from 'scripts/build/GNUmakefile' or taking > the updated patchset from here without the 'local-test' recursion in > GNUmakefile.in To be clear: I'm not blaming anyone, least of all James; 'test-baseline' really was the best way possible to test the initial patch before Han-Wen added compatibility code. IMO this just happens to be a bad coincidence of two problems: The test not running from 'make test', but only 'make check'; and the test not working on Ubuntu at all.
On 2020/03/12 09:52:31, hahnjo wrote: > On 2020/03/12 09:33:43, hahnjo wrote: > > On 2020/03/12 09:22:09, dak wrote: > > > On 2020/03/12 08:01:03, hahnjo wrote: > > > > This looks like bash-ism which might explain why it works for Han-Wen and > > me. > > > I > > > > agree with him that disabling the local-test invocation in GNUmakefile.in > is > > > > probably the easiest solution for now. These tests haven't run for years, > so > > > > we'll definitely be fine without them for a few more days. > > > > > > dak@lola:/usr/local/tmp/lilypond$ dash > > > $ echo {1,2,3} > > > {1,2,3} > > > $ > > > > > > Ah yes. Since /bin/sh defaults to dash on Ubuntu (or doesn't it any more?), > I > > > wonder how this escaped testing. > > > > It wasn't tested, that's the point: The initial patch only received a > > 'test-baseline' and no 'check' which didn't trigger the python tests. > > 'patchy-staging' only runs 'test' as far as I understand, so that patch landed > > in master. Now this patch adds it to 'test' which means it's the first time > > somebody runs it on Ubuntu. > > I'm for disabling it again until it receives sufficient testing. Either in > > current master removing 'local-check' from 'scripts/build/GNUmakefile' or > taking > > the updated patchset from here without the 'local-test' recursion in > > GNUmakefile.in > > To be clear: I'm not blaming anyone, least of all James; 'test-baseline' really > was the best way possible to test the initial patch before Han-Wen added > compatibility code. IMO this just happens to be a bad coincidence of two > problems: The test not running from 'make test', but only 'make check'; and the > test not working on Ubuntu at all. Well, there is not a whole lot to be gained from blaming anybody anyway when there was no damage (not that the blame game makes a lot of sense when there is damage). Patch needs work (whether it contains a problem itself or triggers a preexisting one that needs to be fixed in order for the patch to go ahead), but it was caught before the problem affected everyone.
On 2020/03/12 10:03:22, dak wrote: > Patch needs work (whether it contains a problem itself or triggers a > preexisting one that needs to be fixed in order for the patch to go ahead), but > it was caught before the problem affected everyone. I think it already affects everyone: Current master fails 'make check' if you're building out-of-tree. That's a stopper for testing new patches. So we have to decide now to either a) give this updated patch a try with patchy or b) just disable the test by removing 'local-check' in scripts/build/GNUmakefile. If it's still broken when I come home, I'll do b).
On Thu, Mar 12, 2020 at 10:37 AM <pkx166h@posteo.net> wrote: > > Hello > > What exactly am I supposed to be testing? > > With or without make check? > > I am struggling with all this 'back and forth' and with patches getting > created and tested by different people (worksforme, doesn't work for me > etc.). This is exactly why I have been advocating tests based on docker containers, so we have a common understanding of when something passes tests and when not. -- Han-Wen Nienhuys - hanwenn@gmail.com - http://www.xs4all.nl/~hanwen
Han-Wen Nienhuys <hanwenn@gmail.com> writes: > On Thu, Mar 12, 2020 at 10:37 AM <pkx166h@posteo.net> wrote: >> >> Hello >> >> What exactly am I supposed to be testing? >> >> With or without make check? >> >> I am struggling with all this 'back and forth' and with patches getting >> created and tested by different people (worksforme, doesn't work for me >> etc.). > > This is exactly why I have been advocating tests based on docker > containers, so we have a common understanding of when something passes > tests and when not. You'll find that a docker container also needs instructions of what exactly to test for. -- David Kastrup
On 12/03/2020 10:32, Han-Wen Nienhuys wrote:
> On Thu, Mar 12, 2020 at 10:37 AM <pkx166h@posteo.net> wrote:
>> Hello
>>
>> What exactly am I supposed to be testing?
>>
>> With or without make check?
>>
>> I am struggling with all this 'back and forth' and with patches getting
>> created and tested by different people (worksforme, doesn't work for me
>> etc.).
> This is exactly why I have been advocating tests based on docker
> containers, so we have a common understanding of when something passes
> tests and when not.

Actually, with all due respect, this is neither here nor there, is it?

I had said that tests for this (or other issues) had failed make check and was told - in one case by yourself - that this was expected and not to run the make check. So this is why I put '...make test-baseline..' in my 'passes ...' note in the tracker. I think Jonas queried it once (which led to me learning how to make my set of tests more robust for build file patches).

I don't have a problem following instructions, and you developers make the final decisions, so I have to assume there is a good reason I am told to ignore a test.

Would docker give us this 'proverbial canary' or would it turn into 'worksforme' when someone tried to build their own version of LP on a vanilla base of Linux ;)

James
> Would docker give us this 'proverbial canary' or would it turn into
> 'worksforme' when someone tried to build their own version of LP on a
> vanilla base of Linux?

Docker would eliminate 'worksforme' type issues, yes.
On 2020/03/12 10:10:23, hahnjo wrote:
> On 2020/03/12 10:03:22, dak wrote:
> > Patch needs work (whether it contains a problem itself or triggers a
> > preexisting one that needs to be fixed in order for the patch to go ahead), but
> > it was caught before the problem affected everyone.
>
> I think it already affects everyone: Current master fails 'make check' if you're
> building out-of-tree. That's a stopper for testing new patches. So we have to
> decide now to either a) give this updated patch a try with patchy or b) just
> disable the test by removing 'local-check' in scripts/build/GNUmakefile. If it's
> still broken when I come home, I'll do b).

Pushed to staging:

commit 92b75c19c78b426d453c1e8ec7cda39a0d552fb3
Author: Jonas Hahnfeld <hahnjo@hahnjo.de>
Date:   Thu Mar 12 13:35:28 2020 +0100

    Deactivate self-tests of output-distance.py

    This doesn't work for out-of-tree builds because the script is not
    copied over. Furthermore the test is broken on Ubuntu with dash due
    to bash-isms in the system() commands.

diff --git a/scripts/build/GNUmakefile b/scripts/build/GNUmakefile
index d406b38b59..8cdd6c22d7 100644
--- a/scripts/build/GNUmakefile
+++ b/scripts/build/GNUmakefile
@@ -14,6 +14,3 @@ include $(depth)/make/stepmake.make
 #INSTALLATION_OUT_FILES1=$(outdir)/lilypond-login $(outdir)/lilypond-profile
 
 all: $(INSTALLATION_FILES)
-
-local-check:
-	$(PYTHON) output-distance.py --test
On Thursday, 12 March 2020 at 11:32 +0100, Han-Wen Nienhuys wrote:
> On Thu, Mar 12, 2020 at 10:37 AM <pkx166h@posteo.net> wrote:
> > Hello
> >
> > What exactly am I supposed to be testing?
> >
> > With or without make check?
> >
> > I am struggling with all this 'back and forth' and with patches getting
> > created and tested by different people (worksforme, doesn't work for me
> > etc.).
>
> This is exactly why I have been advocating tests based on docker
> containers, so we have a common understanding of when something passes
> tests and when not.

In this case, it would have been strictly worse: It would have passed patchy-staging and master would still be broken for native Ubuntu. Unless you propose to test all possible setups in Docker, which is impossible, I'd say.

Jonas
On 12/03/2020 12:36, Kevin Barry wrote:
> > Would docker give us this 'proverbial canary' or would it turn into
> > 'worksforme' when someone tried to build their own version of LP on a
> > vanilla base of Linux?
>
> Docker would eliminate 'worksforme' type issues, yes.

And yet ... isn't this exactly what happened? It worked for Han-Wen but not for me. So who is right?

I'll defer you to Jonas' reply to this thread just after yours.

I'm all for consistent build envs but at least make sure your testing is actually ... err, testing what it should be testing.

Containers don't protect against that.

James
On Thu, 12 Mar 2020 at 12:48, <pkx166h@posteo.net> wrote: > I'll defer you to Jonas' reply to this thread just after yours. > > I'm all for conistent build envs but at least make sure your testing is actually ... err testing what it should be testing. > > Containers don't protect against that. A docker container is the same everywhere; the underlying distribution or other differences in people's setups don't make any difference. So if there was an agreed dockerfile that would simplify discussions about 'worksforme'. Jonas's reply referred to native Ubuntu, which misses Han-Wen's point I think. The container provides a consistent and portable environment (providing you have docker installed, it's basically a single text file). If we had some kind of official docker/container image/file for these tests, then if something passes there, but not on someone's personalised distribution then it would be on that person to figure it out (or just use the docker container). And developers would have a single target for testing. Kevin
Kevin Barry <barrykp@gmail.com> writes: > On Thu, 12 Mar 2020 at 12:48, <pkx166h@posteo.net> wrote: >> I'll defer you to Jonas' reply to this thread just after yours. >> >> I'm all for conistent build envs but at least make sure your testing >> is actually ... err testing what it should be testing. >> >> Containers don't protect against that. > > A docker container is the same everywhere; the underlying distribution > or other differences in people's setups don't make any difference. So > if there was an agreed dockerfile that would simplify discussions > about 'worksforme'. Frankly, I am more sympathetic to "worksforme" discussions among developers than telling users "worksforme". Where is the point in being able to tell users that no developer will reproduce their problem? I'd rather have an error popping up for at least some developers than for none. -- David Kastrup My replies have a tendency to cause friction. To help mitigating damage, feel free to forward problematic posts to me adding a subject like "timeout 1d" (for a suggested timeout of 1 day) or "offensive".
> > > Frankly, I am more sympathetic to "worksforme" discussions among > developers than telling users "worksforme". Where is the point in being > able to tell users that no developer will reproduce their problem? > > I'd rather have an error popping up for at least some developers than > for none. > This sounds like you are saying it's better for the situation to be a mess for developers so that they can better help users deal with the same mess, therefore we should leave things as they are. Installing docker and building an image is much easier than setting up a working build environment for LilyPond now. I think it would be a win for both devs and users. Kevin
Kevin Barry <barrykp@gmail.com> writes:

>> Frankly, I am more sympathetic to "worksforme" discussions among
>> developers than telling users "worksforme". Where is the point in being
>> able to tell users that no developer will reproduce their problem?
>>
>> I'd rather have an error popping up for at least some developers than
>> for none.
>
> This sounds like you are saying it's better for the situation to be a mess
> for developers so that they can better help users deal with the same mess,
> therefore we should leave things as they are.

I say that having a developer monoculture doesn't buy us anything since we still need to provide for a multitude of users.

> Installing docker and building an image is much easier than setting up
> a working build environment for LilyPond now.

Get a LilyPond source .deb and do sudo apt build-dep on it. Afterwards you have a working build environment.

> I think it would be a win for both devs and users.

I don't really see the underlying logic. Users should consider it a win when the developers state "you are no longer allowed to run LilyPond natively, get a docker container", and you want to convince developers to stop using and developing LilyPond natively on their systems because it will be so much easier to maintain a virtual layer in between?

We have had the LilyDev VM for a long time now. It has seen some use, but not overwhelmingly much, and the reasons for that are pretty much the same for newer virtualisation methods.

--
David Kastrup
> I say that having a developer monoculture doesn't buy us anything since
> we still need to provide for a multitude of users.

We are talking about testing builds, right? If a user gets as far as "I need to test changes I made to the source code" then surely it would be better to have something to point them to than to say "let's see if one of the devs ever ran into the same problem you are having". It also means we could have some confidence when figuring out if a problem is environmental or not.

Having some kind of official dockerfile isn't pushing a monoculture: it actually makes it easier for people to run whatever OS they want and not have to keep it in line with LilyPond build requirements. It would make building on Windows or MacOS easier since there are prepackaged docker apps for both.

> Installing docker and building an image is much easier than setting up
> a working build environment for LilyPond now.
>
> Get a LilyPond source .deb and do sudo apt build-dep on it. Afterwards
> you have a working build environment.

That would not work on my system. Making everyone use .deb is another kind of monoculture.

> I don't really see the underlying logic. Users should consider it a win
> when the developers state "you are no longer allowed to run LilyPond
> natively, get a docker container", and you want to convince developers
> to stop using and developing LilyPond natively on their systems because
> it will be so much easier to maintain a virtual layer in between?

I wouldn't ever suggest that we make running LilyPond require docker. I thought this discussion was about testing builds.

> We have had the LilyDev VM for a long time now. It has seen some use,
> but not overwhelmingly much, and the reasons for that are pretty much
> the same for newer virtualisation methods.

I disagree. The docker image specification could be a simple text file that is kept in the LilyPond repo, and people can build it if they want. Tests could build it and use that image for testing, etc, etc.

Kevin
On Mar 12, 2020, at 08:36, Kevin Barry <barrykp@gmail.com> wrote: > >> Would docker give us this 'proverbial canary' or would it turn into >> 'worksforme' when someone tried to build their own version of LP on a >> vanilla base of Linux? >> > Docker would eliminate 'worksforme' type issues yes. The direction of this statement is correct, but the magnitude is not. The kernel is still provided by the host. Getting a crash report can be frustrating when the guest's behavior hinges on /proc features that the host OS has configured appropriately for the host, not the guest. Configurable security restrictions can make the debugging experience different from one installation to another. Et cetera. — Dan
> > > The direction of this statement is correct, but the magnitude is not. The > kernel is still provided by the host. Getting a crash report can be > frustrating when the guest's behavior hinges on /proc features that the > host OS has configured appropriately for the host, not the guest. > Configurable security restrictions can make the debugging experience > different from one installation to another. Et cetera. > Yes it's true that containers are not completely safe from host configurations, but I didn't think talking about the 1% would help this discussion. If you think it makes pursuing this idea a waste of time then fair enough. David K doesn't like it either so I think it's time to let it go. >
On Thu, Mar 12, 2020 at 6:17 PM David Kastrup <dak@gnu.org> wrote:
> Kevin Barry <barrykp@gmail.com> writes:
>
> >> Frankly, I am more sympathetic to "worksforme" discussions among
> >> developers than telling users "worksforme". Where is the point in being
> >> able to tell users that no developer will reproduce their problem?
> >>
> >> I'd rather have an error popping up for at least some developers than
> >> for none.
> >
> > This sounds like you are saying it's better for the situation to be a mess
> > for developers so that they can better help users deal with the same mess,
> > therefore we should leave things as they are.
>
> I say that having a developer monoculture doesn't buy us anything since
> we still need to provide for a multitude of users.

Much to the contrary. With Docker, one can test on a multitude of platforms. For example, with https://github.com/hanwen/lilypond-ci I can easily test patches from Rietveld, GitHub and my local machine against Ubuntu Xenial, Fedora, and Fedora with Guile 2. This didn't catch the problem with dash vs bash, but I can adapt one of the images to install dash rather than bash as the shell. I also haven't done separate build directories, but will do so shortly.

This means that individual developers can test changes that are risky (version upgrades, build system changes) more widely, and can push them with more assurance.

I encourage you to try it out, and give some feedback.

--
Han-Wen Nienhuys - hanwenn@gmail.com - http://www.xs4all.nl/~hanwen
Kevin Barry <barrykp@gmail.com> writes: >> >> >> The direction of this statement is correct, but the magnitude is not. The >> kernel is still provided by the host. Getting a crash report can be >> frustrating when the guest's behavior hinges on /proc features that the >> host OS has configured appropriately for the host, not the guest. >> Configurable security restrictions can make the debugging experience >> different from one installation to another. Et cetera. >> > > Yes it's true that containers are not completely safe from host > configurations, but I didn't think talking about the 1% would help this > discussion. If you think it makes pursuing this idea a waste of time then > fair enough. David K doesn't like it either so I think it's time to let it > go. "is not convinced of various of the benefits this is touted with" is not the same as "doesn't like it". At the current point I don't see the promised net wins that significantly depend on an abundance of processing time being available either from volunteers or for pay. The cost would become less if most of the test containers would not do documentation builds. However, where multiple versions of Ghostscript are likely to cause trouble, that kind of restriction would not seem to be optimal either. So to end up a net win even given the restriction that we retain the processing time of volunteers (since we are basically bound to blow the free tiers of anything else with serious testing), our test setup would need to get seriously more fine-grained. -- David Kastrup
On Mar 13, 2020, at 04:43, Kevin Barry <barrykp@gmail.com> wrote: > > The direction of this statement is correct, but the magnitude is not. The kernel is still provided by the host. Getting a crash report can be frustrating when the guest's behavior hinges on /proc features that the host OS has configured appropriately for the host, not the guest. Configurable security restrictions can make the debugging experience different from one installation to another. Et cetera. > > Yes it's true that containers are not completely safe from host configurations, but I didn't think talking about the 1% would help this discussion. If you think it makes pursuing this idea a waste of time then fair enough. I think you misread me. I was agreeing that relying on containers greatly reduces "works for me" problems among the group using them; but I thought that in this forum, a good number of people would take "eliminate" literally and would therefore be thankful for some clarification. Regards, — Dan
commit e325a23887fd93e56da2a13dd59a8b82a8ce74a0
Author: Han-Wen Nienhuys <hanwen@lilypond.org>
Date:   Wed Mar 11 20:58:46 2020 +0100

    Address output-distance problems:

    * Run output-distance.py from srcdir
    * Generate self-test HTML in out/