Reduce memory use of lilypond-book
The snippet extraction was using re.search(string[start:end]). Each
call created a fresh copy of the input string, so processing a 12 MB
input could use over 6 GB of memory.
Instead, use a compiled pattern's regex.search(string, start, end),
which searches within the given range and avoids the string slicing.
Do some general cleanups while we're at it:
* move find_toplevel_snippets to book_base
* add unittest for find_toplevel_snippets
* remove wildcard imports
* add some doc strings
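
The change rests on the pos/endpos arguments that compiled pattern
objects accept. A minimal sketch of the difference (the pattern and
text below are illustrative placeholders, not the actual lilypond-book
snippet syntax):

```python
import re

# A compiled pattern can be searched within a window of a large string
# without slicing: pattern.search(s, pos, endpos) scans s[pos:endpos]
# in place. By contrast, re.search(pattern, s[pos:endpos]) first builds
# a copy of the slice, which is costly when repeated over a
# multi-megabyte input.
pattern = re.compile(r'\\lilypond')  # hypothetical snippet marker

text = 'prelude ' * 1000 + r'\lilypond{ c4 }' + ' coda' * 1000

# Memory-friendly: restrict the search to a sub-range, no copy made.
match = pattern.search(text, 100, len(text))
if match:
    print(match.start(), match.group())
```

Note one semantic difference: with pos/endpos, anchors like ^ still
refer to the start of the whole string, not the start of the window,
so patterns may need adjusting when converting slice-based code.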
Total comments: 1
Stats: +283 lines, -224 lines

  M  python/GNUmakefile         1 chunk     +3 lines,   -0 lines
  M  python/book_base.py        8 chunks    +106 lines, -6 lines
  A  python/book_base_test.py   1 chunk     +51 lines,  -0 lines
  M  python/book_docbook.py     6 chunks    +18 lines,  -17 lines
  M  python/book_html.py        4 chunks    +26 lines,  -30 lines
  M  python/book_latex.py       7 chunks    +21 lines,  -28 lines
  M  python/book_snippets.py    6 chunks    +13 lines,  -11 lines
  M  python/book_texinfo.py     15 chunks   +34 lines,  -32 lines
  M  scripts/lilypond-book.py   9 chunks    +11 lines,  -100 lines
Total messages: 6