author    Marek Vasut <[email protected]>   2022-07-12 19:41:29 +0200
committer Simon Glass <[email protected]>   2022-07-26 02:30:56 -0600
commit    109dbdf042e2a034edd8ed7b711143c522cb1465
tree      81007f0febba353c84cadd5471bd2de24ab8db63
parent    54e89a8beb0edc6135586fed2a71139830d94974
binman: Increase default fitImage data section resize step from 1k to 64k
Currently the fitImage data area is resized in 1 kiB steps. This works
when bundling smaller images below some 1 MiB, but when bundling large
images into the fitImage, this makes binman spend an extreme amount of
time and CPU just spinning in pylibfdt FdtSw.check_space() until the
size grows enough for the large image to fit into the data area.
Increase the default step to 64 kiB, which is a reasonable compromise --
the U-Boot blobs are somewhere in the 64 kiB...1 MiB range, DT blobs are
just short of 64 kiB, and so are the other blobs. This reduces binman
runtime with a 32 MiB blob from 2.3 minutes to 5 seconds.
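The cost described above can be sketched with a small back-of-the-envelope model (this is illustrative Python, not binman code; the function name and the 1 kiB initial size are assumptions for the sketch): growing a buffer in fixed-size steps means the number of grow-and-retry passes is roughly payload_size / step_size, so a 64x larger step cuts the pass count by 64x.

```python
def resize_passes(payload_size, step_size, initial_size=1024):
    """Count how many times a buffer grown in fixed-size steps must be
    enlarged before payload_size bytes fit (hypothetical model)."""
    size = initial_size
    passes = 0
    while size < payload_size:
        size += step_size
        passes += 1
    return passes

blob = 32 * 1024 * 1024                 # the 32 MiB rand.bin from the example
print(resize_passes(blob, 1024))        # 1 kiB steps  -> 32767 passes
print(resize_passes(blob, 64 * 1024))   # 64 kiB steps -> 512 passes
```

Each pass is not free: the sequential-write tree build is replayed against the enlarged buffer, which is why the 1 kiB step dominates runtime for large blobs.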
The following device tree snippet can be used to trigger the problem when
rand.bin is around 32 MiB in size.
"
/ {
    itb {
        fit {
            images {
                test {
                    compression = "none";
                    description = "none";
                    type = "flat_dt";

                    blob {
                        filename = "rand.bin";
                        type = "blob-ext";
                    };
                };
            };

            configurations {
                binman_configuration: config {
                    loadables = "test";
                };
            };
        };
    };
};
"
Signed-off-by: Marek Vasut <[email protected]>
Cc: Alper Nebi Yasak <[email protected]>
Cc: Simon Glass <[email protected]>
Reviewed-by: Simon Glass <[email protected]>
 tools/binman/etype/fit.py | 1 +
 1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py
index 12306623af6..ad43fce18ec 100644
--- a/tools/binman/etype/fit.py
+++ b/tools/binman/etype/fit.py
@@ -658,6 +658,7 @@ class Entry_fit(Entry_section):
         # Build a new tree with all nodes and properties starting from the
         # entry node
         fsw = libfdt.FdtSw()
+        fsw.INC_SIZE = 65536
         fsw.finish_reservemap()
         to_remove = []
         loadables = []
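To see why the step size matters so much, the grow-and-retry pattern around FdtSw can be modeled in plain Python (a hedged sketch of the assumed behavior, not pylibfdt source: the class and function names here are invented for illustration). When the sequential-write buffer runs out of space, the build is restarted against a buffer enlarged by the increment, so all work done so far is replayed on every retry; with a small increment the total work grows quadratically in the blob size.

```python
class NoSpace(Exception):
    """Stand-in for the out-of-space condition reported during the build."""


def build_fit(blob_size, inc_size, initial=1024):
    """Emulate a grow-and-retry sequential build (illustrative model).

    Returns (final_buffer_size, total_bytes_processed); the second value
    counts the replayed work across all attempts.
    """
    size = initial
    work = 0
    while True:
        try:
            # Replay the whole build against the current buffer.
            work += min(size, blob_size)    # bytes handled this attempt
            if size < blob_size:
                raise NoSpace()
            return size, work
        except NoSpace:
            size += inc_size                # the step this patch raises to 64 kiB

blob = 32 * 1024 * 1024
_, work_1k = build_fit(blob, 1024)
_, work_64k = build_fit(blob, 64 * 1024)
print(work_1k // work_64k)                  # small steps do far more total work
```

The patch does not change the algorithm, only the increment, which is why a one-line change is enough to turn 2.3 minutes into 5 seconds for a 32 MiB payload.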
