| author | Tom Rini <[email protected]> | 2022-01-27 14:14:47 -0500 |
|---|---|---|
| committer | Tom Rini <[email protected]> | 2022-01-27 14:14:47 -0500 |
| commit | 761a1786e125ce0e2f472f25f2b841d5f4e2f0cb (patch) | |
| tree | 39f5349498ded95ab427bbde2dd8ff2f633303de /tools | |
| parent | 9a1dd6dcfefc56c05ee7f7249faaa97c5f937fbc (diff) | |
| parent | 2d2384bbaff0ab84c868b553c74048a5f6acc9e3 (diff) | |
Merge tag 'dm-pull-26jan22' of https://source.denx.de/u-boot/custodians/u-boot-dm
acpi refactoring to allow non-x86 use
binman support for bintools (binary tools)
minor tools improvements in preparation for FDT signing
various minor fixes and improvements
Diffstat (limited to 'tools')
43 files changed, 2988 insertions, 444 deletions
diff --git a/tools/binman/binman.rst b/tools/binman/binman.rst index 3e063d1f86c..ab5a5e06b15 100644 --- a/tools/binman/binman.rst +++ b/tools/binman/binman.rst @@ -185,14 +185,37 @@ Binman is intended to replace all of this, with ifdtool left to handle only the configuration of the Intel-format descriptor. -Running binman --------------- +Installing binman +----------------- First install prerequisites, e.g:: sudo apt-get install python-pyelftools python3-pyelftools lzma-alone \ liblz4-tool +You can run binman directly if you put it on your PATH. But if you want to +install into your `~/.local` Python directory, use:: + + pip install tools/patman tools/dtoc tools/binman + +Note that binman makes use of libraries from patman and dtoc, which is why these +need to be installed. Also you need `libfdt` and `pylibfdt` which can be +installed like this:: + + git clone git://git.kernel.org/pub/scm/utils/dtc/dtc.git + cd dtc + pip install . + make NO_PYTHON=1 install + +This installs the `libfdt.so` library into `~/lib` so you can use +`LD_LIBRARY_PATH=~/lib` when running binman. If you want to install it in the +system-library directory, replace the last line with:: + + make NO_PYTHON=1 PREFIX=/ install + +Running binman +-------------- + Type:: binman build -b <board_name> @@ -707,7 +730,7 @@ The above feature ensures that the devicetree is clearly separated from the U-Boot executable and can be updated separately by binman as needed. It can be disabled with the --no-expanded flag if required. -The same applies for u-boot-spl and u-boot-spl. In those cases, the expansion +The same applies for u-boot-spl and u-boot-tpl. In those cases, the expansion includes the BSS padding, so for example:: spl { @@ -1004,6 +1027,77 @@ by increasing the -v/--verbosity from the default of 1: You can use BINMAN_VERBOSE=5 (for example) when building to select this. 
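The `libfdt`/`pylibfdt` installation steps in the hunk above can be sanity-checked with a short snippet. This is only a sketch: it probes whether the `libfdt` Python module is findable, and does not verify binman itself.

```python
# Probe for the pylibfdt module ('libfdt') that binman needs, without
# importing it, so a missing module does not raise an exception.
import importlib.util

def probe(module_name):
    """Return 'ok' if module_name is findable on sys.path, else 'missing'."""
    return 'ok' if importlib.util.find_spec(module_name) else 'missing'

print('libfdt:', probe('libfdt'))
```

If this prints `missing`, re-run the `pip install .` step in the dtc tree; the `LD_LIBRARY_PATH=~/lib` hint above only covers the shared library, not the Python module.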
+Bintools +======== + +`Bintool` is the name binman gives to a binary tool which it uses to create and +manipulate binaries that binman cannot handle itself. Bintools are often +necessary since Binman only supports a subset of the available file formats +natively. + +Many SoC vendors invent ways to load code into their SoC using new file formats, +sometimes changing the format with successive SoC generations. Sometimes the +tool is available as Open Source. Sometimes it is a pre-compiled binary that +must be downloaded from the vendor's website. Sometimes it is available in +source form but difficult or slow to build. + +Even for images that use bintools, binman still assembles the image from its +image description. It may handle parts of the image natively and other parts +with various bintools. + +Binman relies on these tools so it provides various features to manage them: + +- Determining whether the tool is currently installed +- Downloading or building the tool +- Determining the version of the tool that is installed +- Deciding which tools are needed to build an image + +The Bintool class is an interface to the tool, a thin level of abstraction, using +Python functions to run the tool for each purpose (e.g. creating a new +structure, adding a file to an existing structure) rather than just lists of +string arguments. + +As with external blobs, bintools (which are like 'external' tools) can be +missing. When building an image requires a bintool and it is not installed, +binman detects this and reports the problem, but continues to build the image. +This is useful in CI systems which want to check that everything is correct but +don't have access to the bintools. + +To make this work, all calls to bintools (e.g. with Bintool.run_cmd()) must cope +with the tool being missing, i.e.
when None is returned, by: + +- Calling self.record_missing_bintool() +- Setting up some fake contents so binman can continue + +Of course the image will not work, but binman reports which bintools are needed +and also provides a way to fetch them. + +To see the available bintools, use:: + + binman tool --list + +To fetch tools which are missing, use:: + + binman tool --fetch missing + +You can also use `--fetch all` to fetch all tools or `--fetch <tool>` to fetch +a particular tool. Some tools are built from source code, in which case you will +need to have at least the `build-essential` and `git` packages installed. + +Bintool Documentation +===================== + +To provide details on the various bintools supported by binman, bintools.rst is +generated from the source code using:: + + binman bintool-docs >tools/binman/bintools.rst + +.. toctree:: + :maxdepth: 2 + + bintools + + Technical details ================= @@ -1138,6 +1232,35 @@ To enable Python test coverage on Debian-type distributions (e.g. Ubuntu):: $ sudo apt-get install python-coverage python3-coverage python-pytest +Error messages +-------------- + +This section provides some guidance on some of the less obvious error messages +produced by binman. + + +Expected __bss_size symbol +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Example:: + + binman: Node '/binman/u-boot-spl-ddr/u-boot-spl/u-boot-spl-bss-pad': + Expected __bss_size symbol in spl/u-boot-spl + +This indicates that binman needs the `__bss_size` symbol to be defined in the +SPL binary, where `spl/u-boot-spl` is the ELF file containing the symbols. The +symbol tells binman the size of the BSS region, in bytes. It needs this to be +able to pad the image so that the following entries do not overlap the BSS, +which would cause them to be overwritten by variable access in SPL.
+ +This symbol is normally defined in the linker script, immediately after +__bss_start and __bss_end are defined, like this:: + + __bss_size = __bss_end - __bss_start; + +You may need to add it to your linker script if you get this error. + + Concurrent tests ---------------- diff --git a/tools/binman/bintool.py b/tools/binman/bintool.py new file mode 100644 index 00000000000..e2e5660d167 --- /dev/null +++ b/tools/binman/bintool.py @@ -0,0 +1,466 @@ +# SPDX-License-Identifier: GPL-2.0+ +# Copyright 2022 Google LLC +# +"""Base class for all bintools + +This defines the common functionality for all bintools, including running +the tool, checking its version and fetching it if needed. +""" + +import collections +import glob +import importlib +import multiprocessing +import os +import shutil +import tempfile +import urllib.error + +from patman import command +from patman import terminal +from patman import tools +from patman import tout + +BINMAN_DIR = os.path.dirname(os.path.realpath(__file__)) + +# Format string for listing bintools, see also the header in list_all() +FORMAT = '%-16.16s %-12.12s %-26.26s %s' + +# List of known modules, to avoid importing the module multiple times +modules = {} + +# Possible ways of fetching a tool (FETCH_COUNT is number of ways) +FETCH_ANY, FETCH_BIN, FETCH_BUILD, FETCH_COUNT = range(4) + +FETCH_NAMES = { + FETCH_ANY: 'any method', + FETCH_BIN: 'binary download', + FETCH_BUILD: 'build from source' + } + +# Status of tool fetching +FETCHED, FAIL, PRESENT, STATUS_COUNT = range(4) + +DOWNLOAD_DESTDIR = os.path.join(os.getenv('HOME'), 'bin') + +class Bintool: + """Tool which operates on binaries to help produce entry contents + + This is the base class for all bintools + """ + # List of bintools to regard as missing + missing_list = [] + + def __init__(self, name, desc): + self.name = name + self.desc = desc + + @staticmethod + def find_bintool_class(btype): + """Look up the bintool class for a bintool + + Args: + btype: Bintool to use,
e.g. 'mkimage' + + Returns: + The bintool class object if found, else a tuple: + module name that could not be found + exception received + """ + # Convert something like 'u-boot' to 'u_boot' since we are only + # interested in the type. + module_name = btype.replace('-', '_') + module = modules.get(module_name) + + # Import the module if we have not already done so + if not module: + try: + module = importlib.import_module('binman.btool.' + module_name) + except ImportError as exc: + return module_name, exc + modules[module_name] = module + + # Look up the expected class name + return getattr(module, 'Bintool%s' % module_name) + + @staticmethod + def create(name): + """Create a new bintool object + + Args: + name (str): Bintool to create, e.g. 'mkimage' + + Returns: + A new object of the correct type (a subclass of Bintool) + """ + cls = Bintool.find_bintool_class(name) + if isinstance(cls, tuple): + raise ValueError("Cannot import bintool module '%s': %s" % cls) + + # Call its constructor to get the object we want.
obj = cls(name) + return obj + + def show(self): + """Show a line of information about a bintool""" + if self.is_present(): + version = self.version() + else: + version = '-' + print(FORMAT % (self.name, version, self.desc, + self.get_path() or '(not found)')) + + @classmethod + def set_missing_list(cls, missing_list): + cls.missing_list = missing_list or [] + + @staticmethod + def get_tool_list(include_testing=False): + """Get a list of the known tools + + Returns: + list of str: names of all tools known to binman + """ + files = glob.glob(os.path.join(BINMAN_DIR, 'btool/*')) + names = [os.path.splitext(os.path.basename(fname))[0] + for fname in files] + names = [name for name in names if name[0] != '_'] + if include_testing: + names.append('_testing') + return sorted(names) + + @staticmethod + def list_all(): + """List all the bintools known to binman""" + names = Bintool.get_tool_list() + print(FORMAT % ('Name', 'Version', 'Description', 'Path')) + print(FORMAT % ('-' * 15,'-' * 11, '-' * 25, '-' * 30)) + for name in names: + btool = Bintool.create(name) + btool.show() + + def is_present(self): + """Check if a bintool is available on the system + + Returns: + bool: True if available, False if not + """ + if self.name in self.missing_list: + return False + return bool(self.get_path()) + + def get_path(self): + """Get the path of a bintool + + Returns: + str: Path to the tool, if available, else None + """ + return tools.tool_find(self.name) + + def fetch_tool(self, method, col, skip_present): + """Fetch a single tool + + Args: + method (FETCH_...): Method to use + col (terminal.Color): Color terminal object + skip_present (bool): Skip fetching if it is already present + + Returns: + int: Result of fetch, either FETCHED, FAIL or PRESENT + """ + def try_fetch(meth): + res = None + try: + res = self.fetch(meth) + except urllib.error.URLError as uerr: + message = uerr.reason + print(col.Color(col.RED, f'- {message}')) + + except ValueError as exc: + print(f'Exception:
{exc}') + return res + + if skip_present and self.is_present(): + return PRESENT + print(col.Color(col.YELLOW, 'Fetch: %s' % self.name)) + if method == FETCH_ANY: + for try_method in range(1, FETCH_COUNT): + print(f'- trying method: {FETCH_NAMES[try_method]}') + result = try_fetch(try_method) + if result: + break + else: + result = try_fetch(method) + if not result: + return FAIL + if result is not True: + fname, tmpdir = result + dest = os.path.join(DOWNLOAD_DESTDIR, self.name) + print(f"- writing to '{dest}'") + shutil.move(fname, dest) + if tmpdir: + shutil.rmtree(tmpdir) + return FETCHED + + @staticmethod + def fetch_tools(method, names_to_fetch): + """Fetch bintools from a suitable place + + This fetches or builds the requested bintools so that they can be used + by binman + + Args: + method (FETCH_...): Method to use + names_to_fetch (list of str): names of bintools to fetch + + Returns: + True on success, False on failure + """ + def show_status(color, prompt, names): + print(col.Color( + color, f'{prompt}:%s{len(names):2}: %s' % + (' ' * (16 - len(prompt)), ' '.join(names)))) + + col = terminal.Color() + skip_present = False + name_list = names_to_fetch + if len(names_to_fetch) == 1 and names_to_fetch[0] in ['all', 'missing']: + name_list = Bintool.get_tool_list() + if names_to_fetch[0] == 'missing': + skip_present = True + print(col.Color(col.YELLOW, + 'Fetching tools: %s' % ' '.join(name_list))) + status = collections.defaultdict(list) + for name in name_list: + btool = Bintool.create(name) + result = btool.fetch_tool(method, col, skip_present) + status[result].append(name) + if result == FAIL: + if method == FETCH_ANY: + print('- failed to fetch with all methods') + else: + print(f"- method '{FETCH_NAMES[method]}' is not supported") + + if len(name_list) > 1: + if skip_present: + show_status(col.GREEN, 'Already present', status[PRESENT]) + show_status(col.GREEN, 'Tools fetched', status[FETCHED]) + if status[FAIL]: + show_status(col.RED, 'Failures', status[FAIL]) + return not status[FAIL] + 
+ def run_cmd_result(self, *args, binary=False, raise_on_error=True): + """Run the bintool using command-line arguments + + Args: + args (list of str): Arguments to provide, in addition to the bintool + name + binary (bool): True to return output as bytes instead of str + raise_on_error (bool): True to raise a ValueError exception if the + tool returns a non-zero return code + + Returns: + CommandResult: Resulting output from the bintool, or None if the + tool is not present + """ + if self.name in self.missing_list: + return None + name = os.path.expanduser(self.name) # Expand paths containing ~ + all_args = (name,) + args + env = tools.get_env_with_path() + tout.Detail(f"bintool: {' '.join(all_args)}") + result = command.RunPipe( + [all_args], capture=True, capture_stderr=True, env=env, + raise_on_error=False, binary=binary) + + if result.return_code: + # Return None if the tool was not found. In this case there is no + # output from the tool and it does not appear on the path. We still + # try to run it (as above) since RunPipe() allows faking the tool's + # output + if not any([result.stdout, result.stderr, tools.tool_find(name)]): + tout.Info(f"bintool '{name}' not found") + return None + if raise_on_error: + tout.Info(f"bintool '{name}' failed") + raise ValueError("Error %d running '%s': %s" % + (result.return_code, ' '.join(all_args), + result.stderr or result.stdout)) + if result.stdout: + tout.Debug(result.stdout) + if result.stderr: + tout.Debug(result.stderr) + return result + + def run_cmd(self, *args, binary=False): + """Run the bintool using command-line arguments + + Args: + args (list of str): Arguments to provide, in addition to the bintool + name + binary (bool): True to return output as bytes instead of str + + Returns: + str or bytes: Resulting stdout from the bintool + """ + result = self.run_cmd_result(*args, binary=binary) + if result: + return result.stdout + + @classmethod + def build_from_git(cls, git_repo, make_target, bintool_path): + 
"""Build a bintool from a git repo + + This clones the repo in a temporary directory, builds it with 'make', + then returns the filename of the resulting executable bintool + + Args: + git_repo (str): URL of git repo + make_target (str): Target to pass to 'make' to build the tool + bintool_path (str): Relative path of the tool in the repo, after + build is complete + + Returns: + tuple: + str: Filename of fetched file to copy to a suitable directory + str: Name of temp directory to remove, or None + or None on error + """ + tmpdir = tempfile.mkdtemp(prefix='binmanf.') + print(f"- clone git repo '{git_repo}' to '{tmpdir}'") + tools.Run('git', 'clone', '--depth', '1', git_repo, tmpdir) + print(f"- build target '{make_target}'") + tools.Run('make', '-C', tmpdir, '-j', f'{multiprocessing.cpu_count()}', + make_target) + fname = os.path.join(tmpdir, bintool_path) + if not os.path.exists(fname): + print(f"- File '{fname}' was not produced") + return None + return fname, tmpdir + + @classmethod + def fetch_from_url(cls, url): + """Fetch a bintool from a URL + + Args: + url (str): URL to fetch from + + Returns: + tuple: + str: Filename of fetched file to copy to a suitable directory + str: Name of temp directory to remove, or None + """ + fname, tmpdir = tools.Download(url) + tools.Run('chmod', 'a+x', fname) + return fname, tmpdir + + @classmethod + def fetch_from_drive(cls, drive_id): + """Fetch a bintool from Google drive + + Args: + drive_id (str): ID of file to fetch. 
For a URL of the form + 'https://drive.google.com/file/d/xxx/view?usp=sharing' the value + passed here should be 'xxx' + + Returns: + tuple: + str: Filename of fetched file to copy to a suitable directory + str: Name of temp directory to remove, or None + """ + url = f'https://drive.google.com/uc?export=download&id={drive_id}' + return cls.fetch_from_url(url) + + @classmethod + def apt_install(cls, package): + """Install a bintool using the 'apt' tool + + This requires use of sudo so may request a password + + Args: + package (str): Name of package to install + + Returns: + True, assuming it completes without error + """ + args = ['sudo', 'apt', 'install', '-y', package] + print('- %s' % ' '.join(args)) + tools.Run(*args) + return True + + @staticmethod + def WriteDocs(modules, test_missing=None): + """Write out documentation about the various bintools to stdout + + Args: + modules: List of modules to include + test_missing: Used for testing. This is a module to report + as missing + """ + print('''.. SPDX-License-Identifier: GPL-2.0+ + +Binman bintool Documentation +============================ + +This file describes the bintools (binary tools) supported by binman. Bintools +are binman's name for external executables that it runs to generate or process +binaries. It is fairly easy to create new bintools. Just add a new file to the +'btool' directory. You can use existing bintools as examples.
+ + +''') + modules = sorted(modules) + missing = [] + for name in modules: + module = Bintool.find_bintool_class(name) + docs = getattr(module, '__doc__') + if test_missing == name: + docs = None + if docs: + lines = docs.splitlines() + first_line = lines[0] + rest = [line[4:] for line in lines[1:]] + hdr = 'Bintool: %s: %s' % (name, first_line) + print(hdr) + print('-' * len(hdr)) + print('\n'.join(rest)) + print() + print() + else: + missing.append(name) + + if missing: + raise ValueError('Documentation is missing for modules: %s' % + ', '.join(missing)) + + # pylint: disable=W0613 + def fetch(self, method): + """Fetch handler for a bintool + + This should be implemented by the subclass + + Args: + method (FETCH_...): Method to use + + Returns: + tuple: + str: Filename of fetched file to copy to a suitable directory + str: Name of temp directory to remove, or None + or True if the file was fetched and already installed + or None if no fetch() implementation is available + + Raises: + ValueError: Fetching could not be completed + """ + print(f"No method to fetch bintool '{self.name}'") + return False + + # pylint: disable=R0201 + def version(self): + """Version handler for a bintool + + This should be implemented by the subclass + + Returns: + str: Version string for this bintool + """ + return 'unknown' diff --git a/tools/binman/bintool_test.py b/tools/binman/bintool_test.py new file mode 100644 index 00000000000..3d6bcdab9d1 --- /dev/null +++ b/tools/binman/bintool_test.py @@ -0,0 +1,353 @@ +# SPDX-License-Identifier: GPL-2.0+ +# Copyright 2022 Google LLC +# Written by Simon Glass <[email protected]> +# + +"""Tests for the Bintool class""" + +import collections +import os +import shutil +import tempfile +import unittest +import unittest.mock +import urllib.error + +from binman import bintool +from binman.bintool import Bintool + +from patman import command +from patman import terminal +from patman import test_util +from patman import tools + +# pylint:
disable=R0904 +class TestBintool(unittest.TestCase): + """Tests for the Bintool class""" + def setUp(self): + # Create a temporary directory for test files + self._indir = tempfile.mkdtemp(prefix='bintool.') + self.seq = None + self.count = None + self.fname = None + self.btools = None + + def tearDown(self): + """Remove the temporary input directory and its contents""" + if self._indir: + shutil.rmtree(self._indir) + self._indir = None + + def test_missing_btype(self): + """Test that unknown bintool types are detected""" + with self.assertRaises(ValueError) as exc: + Bintool.create('missing') + self.assertIn("No module named 'binman.btool.missing'", + str(exc.exception)) + + def test_fresh_bintool(self): + """Check that the _testing bintool is not cached""" + btest = Bintool.create('_testing') + btest.present = True + btest2 = Bintool.create('_testing') + self.assertFalse(btest2.present) + + def test_version(self): + """Check handling of a tool being present or absent""" + btest = Bintool.create('_testing') + with test_util.capture_sys_output() as (stdout, _): + btest.show() + self.assertFalse(btest.is_present()) + self.assertIn('-', stdout.getvalue()) + btest.present = True + self.assertTrue(btest.is_present()) + self.assertEqual('123', btest.version()) + with test_util.capture_sys_output() as (stdout, _): + btest.show() + self.assertIn('123', stdout.getvalue()) + + def test_fetch_present(self): + """Test fetching of a tool""" + btest = Bintool.create('_testing') + btest.present = True + col = terminal.Color() + self.assertEqual(bintool.PRESENT, + btest.fetch_tool(bintool.FETCH_ANY, col, True)) + + @classmethod + def check_fetch_url(cls, fake_download, method): + """Check the output from fetching a tool + + Args: + fake_download (function): Function to call instead of + tools.Download() + method (bintool.FETCH_...): Fetch method to use + + Returns: + str: Contents of stdout + """ + btest = Bintool.create('_testing') + col = terminal.Color() + with
unittest.mock.patch.object(tools, 'Download', + side_effect=fake_download): + with test_util.capture_sys_output() as (stdout, _): + btest.fetch_tool(method, col, False) + return stdout.getvalue() + + def test_fetch_url_err(self): + """Test an error while fetching a tool from a URL""" + def fail_download(url): + """Fake the tools.Download() function by raising an exception""" + raise urllib.error.URLError('my error') + + stdout = self.check_fetch_url(fail_download, bintool.FETCH_ANY) + self.assertIn('my error', stdout) + + def test_fetch_url_exception(self): + """Test an exception while fetching a tool from a URL""" + def cause_exc(url): + raise ValueError('exc error') + + stdout = self.check_fetch_url(cause_exc, bintool.FETCH_ANY) + self.assertIn('exc error', stdout) + + def test_fetch_method(self): + """Test fetching using a particular method""" + def fail_download(url): + """Fake the tools.Download() function by raising an exception""" + raise urllib.error.URLError('my error') + + stdout = self.check_fetch_url(fail_download, bintool.FETCH_BIN) + self.assertIn('my error', stdout) + + def test_fetch_pass_fail(self): + """Test fetching multiple tools with some passing and some failing""" + def handle_download(_): + """Fake the tools.Download() function by writing a file""" + if self.seq: + raise urllib.error.URLError('not found') + self.seq += 1 + tools.WriteFile(fname, expected) + return fname, dirname + + expected = b'this is a test' + dirname = os.path.join(self._indir, 'download_dir') + os.mkdir(dirname) + fname = os.path.join(dirname, 'downloaded') + destdir = os.path.join(self._indir, 'dest_dir') + os.mkdir(destdir) + dest_fname = os.path.join(destdir, '_testing') + self.seq = 0 + + with unittest.mock.patch.object(bintool, 'DOWNLOAD_DESTDIR', destdir): + with unittest.mock.patch.object(tools, 'Download', + side_effect=handle_download): + with test_util.capture_sys_output() as (stdout, _): + Bintool.fetch_tools(bintool.FETCH_ANY, ['_testing'] * 2) + 
self.assertTrue(os.path.exists(dest_fname)) + data = tools.ReadFile(dest_fname) + self.assertEqual(expected, data) + + lines = stdout.getvalue().splitlines() + self.assertTrue(len(lines) > 2) + self.assertEqual('Tools fetched: 1: _testing', lines[-2]) + self.assertEqual('Failures: 1: _testing', lines[-1]) + + def test_tool_list(self): + """Test listing available tools""" + self.assertGreater(len(Bintool.get_tool_list()), 3) + + def check_fetch_all(self, method): + """Helper to check the operation of fetching all tools""" + + # pylint: disable=W0613 + def fake_fetch(method, col, skip_present): + """Fakes the Bintool.fetch_tool() function + + Returns FETCHED and FAIL on alternate calls + """ + self.seq += 1 + result = bintool.FETCHED if self.seq & 1 else bintool.FAIL + self.count[result] += 1 + return result + + self.seq = 0 + self.count = collections.defaultdict(int) + with unittest.mock.patch.object(bintool.Bintool, 'fetch_tool', + side_effect=fake_fetch): + with test_util.capture_sys_output() as (stdout, _): + Bintool.fetch_tools(method, ['all']) + lines = stdout.getvalue().splitlines() + self.assertIn(f'{self.count[bintool.FETCHED]}: ', lines[-2]) + self.assertIn(f'{self.count[bintool.FAIL]}: ', lines[-1]) + + def test_fetch_all(self): + """Test fetching all tools""" + self.check_fetch_all(bintool.FETCH_ANY) + + def test_fetch_all_specific(self): + """Test fetching all tools with a specific method""" + self.check_fetch_all(bintool.FETCH_BIN) + + def test_fetch_missing(self): + """Test fetching missing tools""" + # pylint: disable=W0613 + def fake_fetch2(method, col, skip_present): + """Fakes the Bintool.fetch_tool() function + + Returns PRESENT only for the '_testing' bintool + """ + btool = list(self.btools.values())[self.seq] + self.seq += 1 + print('fetch', btool.name) + if btool.name == '_testing': + return bintool.PRESENT + return bintool.FETCHED + + # Preload a list of tools to return when get_tool_list() and create() + # are called + all_tools =
Bintool.get_tool_list(True) + self.btools = collections.OrderedDict() + for name in all_tools: + self.btools[name] = Bintool.create(name) + self.seq = 0 + with unittest.mock.patch.object(bintool.Bintool, 'fetch_tool', + side_effect=fake_fetch2): + with unittest.mock.patch.object(bintool.Bintool, + 'get_tool_list', + side_effect=[all_tools]): + with unittest.mock.patch.object(bintool.Bintool, 'create', + side_effect=self.btools.values()): + with test_util.capture_sys_output() as (stdout, _): + Bintool.fetch_tools(bintool.FETCH_ANY, ['missing']) + lines = stdout.getvalue().splitlines() + num_tools = len(self.btools) + fetched = [line for line in lines if 'Tools fetched:' in line].pop() + present = [line for line in lines if 'Already present:' in line].pop() + self.assertIn(f'{num_tools - 1}: ', fetched) + self.assertIn('1: ', present) + + def check_build_method(self, write_file): + """Check the output from fetching using the BUILD method + + Args: + write_file (bool): True to write the output file when 'make' is + called + + Returns: + tuple: + str: Filename of written file (or missing 'make' output) + str: Contents of stdout + """ + def fake_run(*cmd): + if cmd[0] == 'make': + # See Bintool.build_from_git() + tmpdir = cmd[2] + self.fname = os.path.join(tmpdir, 'pathname') + if write_file: + tools.WriteFile(self.fname, b'hello') + + btest = Bintool.create('_testing') + col = terminal.Color() + self.fname = None + with unittest.mock.patch.object(bintool, 'DOWNLOAD_DESTDIR', + self._indir): + with unittest.mock.patch.object(tools, 'Run', side_effect=fake_run): + with test_util.capture_sys_output() as (stdout, _): + btest.fetch_tool(bintool.FETCH_BUILD, col, False) + fname = os.path.join(self._indir, '_testing') + return fname if write_file else self.fname, stdout.getvalue() + + def test_build_method(self): + """Test fetching using the build method""" + fname, stdout = self.check_build_method(write_file=True) + self.assertTrue(os.path.exists(fname)) + 
self.assertIn(f"writing to '{fname}", stdout) + + def test_build_method_fail(self): + """Test fetching using the build method when no file is produced""" + fname, stdout = self.check_build_method(write_file=False) + self.assertFalse(os.path.exists(fname)) + self.assertIn(f"File '{fname}' was not produced", stdout) + + def test_install(self): + """Test fetching using the install method""" + btest = Bintool.create('_testing') + btest.install = True + col = terminal.Color() + with unittest.mock.patch.object(tools, 'Run', return_value=None): + with test_util.capture_sys_output() as _: + result = btest.fetch_tool(bintool.FETCH_BIN, col, False) + self.assertEqual(bintool.FETCHED, result) + + def test_no_fetch(self): + """Test fetching when there is no method""" + btest = Bintool.create('_testing') + btest.disable = True + col = terminal.Color() + with test_util.capture_sys_output() as _: + result = btest.fetch_tool(bintool.FETCH_BIN, col, False) + self.assertEqual(bintool.FAIL, result) + + def test_all_bintools(self): + """Test that all bintools can handle all available fetch types""" + def handle_download(_): + """Fake the tools.Download() function by writing a file""" + tools.WriteFile(fname, expected) + return fname, dirname + + def fake_run(*cmd): + if cmd[0] == 'make': + # See Bintool.build_from_git() + tmpdir = cmd[2] + self.fname = os.path.join(tmpdir, 'pathname') + tools.WriteFile(self.fname, b'hello') + + expected = b'this is a test' + dirname = os.path.join(self._indir, 'download_dir') + os.mkdir(dirname) + fname = os.path.join(dirname, 'downloaded') + + with unittest.mock.patch.object(tools, 'Run', side_effect=fake_run): + with unittest.mock.patch.object(tools, 'Download', + side_effect=handle_download): + with test_util.capture_sys_output() as _: + for name in Bintool.get_tool_list(): + btool = Bintool.create(name) + for method in range(bintool.FETCH_COUNT): + result = btool.fetch(method) + self.assertTrue(result is not False) + if result is not True and
result is not None: + result_fname, _ = result + self.assertTrue(os.path.exists(result_fname)) + data = tools.ReadFile(result_fname) + self.assertEqual(expected, data) + os.remove(result_fname) + + def test_all_bintool_versions(self): + """Test handling of bintool version when it cannot be run""" + all_tools = Bintool.get_tool_list() + for name in all_tools: + btool = Bintool.create(name) + with unittest.mock.patch.object( + btool, 'run_cmd_result', return_value=command.CommandResult()): + self.assertEqual('unknown', btool.version()) + + def test_force_missing(self): + btool = Bintool.create('_testing') + btool.present = True + self.assertTrue(btool.is_present()) + + btool.present = None + Bintool.set_missing_list(['_testing']) + self.assertFalse(btool.is_present()) + + def test_failed_command(self): + """Check that running a command that does not exist returns None""" + btool = Bintool.create('_testing') + result = btool.run_cmd_result('fred') + self.assertIsNone(result) + + +if __name__ == "__main__": + unittest.main() diff --git a/tools/binman/bintools.rst b/tools/binman/bintools.rst new file mode 100644 index 00000000000..edb373ab59b --- /dev/null +++ b/tools/binman/bintools.rst @@ -0,0 +1,115 @@ +.. SPDX-License-Identifier: GPL-2.0+ + +Binman bintool Documentation +============================ + +This file describes the bintools (binary tools) supported by binman. Bintools +are binman's name for external executables that it runs to generate or process +binaries. It is fairly easy to create new bintools. Just add a new file to the +'btool' directory. You can use existing bintools as examples. + + + +Bintool: cbfstool: Coreboot filesystem (CBFS) tool +-------------------------------------------------- + +This bintool supports creating new CBFS images and adding files to an +existing image, i.e. the features needed by binman. + +It also supports fetching a binary cbfstool, since building it from source +is fairly slow. 
+ +Documentation about CBFS is at https://www.coreboot.org/CBFS + + + +Bintool: fiptool: Image generation for ARM Trusted Firmware +----------------------------------------------------------- + +This bintool supports running `fiptool` with some basic parameters as +needed by binman. + +It also supports building fiptool from source. + +fiptool provides a way to package firmware in an ARM Trusted Firmware +Firmware Image Package (ATF FIP) format. It is used with Trusted Firmware A, +for example. + +See `TF-A FIP tool documentation`_ for more information. + +.. _`TF-A FIP tool documentation`: + https://trustedfirmware-a.readthedocs.io/en/latest/getting_started/tools-build.html?highlight=fiptool#building-and-using-the-fip-tool + + + +Bintool: futility: Handles the 'futility' tool +---------------------------------------------- + +futility (flash utility) is a tool for working with Chromium OS flash +images. This Bintool implements just the features used by Binman, related to +GBB creation and firmware signing. + +A binary version of the tool can be fetched. + +See `Chromium OS vboot documentation`_ for more information. + +.. _`Chromium OS vboot documentation`: + https://chromium.googlesource.com/chromiumos/platform/vboot/+/refs/heads/main/_vboot_reference/README + + + +Bintool: ifwitool: Handles the 'ifwitool' tool +---------------------------------------------- + +This bintool supports running `ifwitool` with some basic parameters as +needed by binman. It includes creating a file from a FIT as well as adding, +replacing, deleting and extracting subparts. + +The tool is built as part of U-Boot, but a binary version can be fetched if +required. + +ifwitool provides a way to package firmware in an Intel Firmware Image +(IFWI) file on some Intel SoCs, e.g. Apollo Lake.
+ + + +Bintool: lz4: Compression/decompression using the LZ4 algorithm +--------------------------------------------------------------- + +This bintool supports running `lz4` to compress and decompress data, as +used by binman. + +It is also possible to fetch the tool, which uses `apt` to install it. + +Documentation is available via:: + + man lz4 + + + +Bintool: lzma_alone: Compression/decompression using the LZMA algorithm +----------------------------------------------------------------------- + +This bintool supports running `lzma_alone` to compress and decompress data, +as used by binman. + +It is also possible to fetch the tool, which uses `apt` to install it. + +Documentation is available via:: + + man lzma_alone + + + +Bintool: mkimage: Image generation for U-Boot +--------------------------------------------- + +This bintool supports running `mkimage` with some basic parameters as +needed by binman. + +Normally binman uses the mkimage built by U-Boot. But when run outside the +U-Boot build system, binman can use the version installed in your system. +Support is provided for fetching this on Debian-like systems, using apt.
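Several of these bintools report their version by running the tool and picking the number out of its banner with a regular expression, falling back to the raw output when the pattern does not match. A minimal sketch of that parsing, following the mkimage and lz4 version handlers in this series (the sample output strings below are assumptions for illustration, not captured tool output):

```python
import re

# Version parsing in the style of the mkimage and lz4 bintools:
# match a known banner format, fall back to the whole string.
def mkimage_version(out):
    m = re.match(r'mkimage version (.*)', out)
    return m.group(1) if m else out

def lz4_version(out):
    m = re.match(r'.* (v[0-9.]*),.*', out)
    return m.group(1) if m else out

print(mkimage_version('mkimage version 2022.01'))
print(lz4_version('*** LZ4 command line interface 64-bits v1.9.3, by Yann Collet ***'))
```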
+ + + diff --git a/tools/binman/btool/_testing.py b/tools/binman/btool/_testing.py new file mode 100644 index 00000000000..4005e8a8a5d --- /dev/null +++ b/tools/binman/btool/_testing.py @@ -0,0 +1,36 @@ +# SPDX-License-Identifier: GPL-2.0+ +# Copyright 2022 Google LLC +# +"""Bintool used for testing + +This is not a real bintool, just one used for testing""" + +from binman import bintool + +# pylint: disable=C0103 +class Bintool_testing(bintool.Bintool): + """Bintool used for testing""" + def __init__(self, name): + super().__init__(name, 'testing') + self.present = False + self.install = False + self.disable = False + + def is_present(self): + if self.present is None: + return super().is_present() + return self.present + + def version(self): + return '123' + + def fetch(self, method): + if self.disable: + return super().fetch(method) + if method == bintool.FETCH_BIN: + if self.install: + return self.apt_install('package') + return self.fetch_from_drive('junk') + if method == bintool.FETCH_BUILD: + return self.build_from_git('url', 'target', 'pathname') + return None diff --git a/tools/binman/btool/cbfstool.py b/tools/binman/btool/cbfstool.py new file mode 100644 index 00000000000..29be2d8a2b5 --- /dev/null +++ b/tools/binman/btool/cbfstool.py @@ -0,0 +1,219 @@ +# SPDX-License-Identifier: GPL-2.0+ +# Copyright 2022 Google LLC +# +"""Bintool implementation for cbfstool + +cbfstool provides a number of features useful with Coreboot Filesystem binaries. + +Documentation is at https://www.coreboot.org/CBFS + +Source code is at https://github.com/coreboot/coreboot/blob/master/util/cbfstool/cbfstool.c + +Here is the help: + +cbfstool: Management utility for CBFS formatted ROM images + +USAGE: + cbfstool [-h] + cbfstool FILE COMMAND [-v] [PARAMETERS]...
+ +OPTIONs: + -H header_offset Do not search for header; use this offset* + -T Output top-aligned memory address + -u Accept short data; fill upward/from bottom + -d Accept short data; fill downward/from top + -F Force action + -g Generate position and alignment arguments + -U Unprocessed; don't decompress or make ELF + -v Provide verbose output + -h Display this help message + +COMMANDs: + add [-r image,regions] -f FILE -n NAME -t TYPE [-A hash] \ + [-c compression] [-b base-address | -a alignment] \ + [-p padding size] [-y|--xip if TYPE is FSP] \ + [-j topswap-size] (Intel CPUs only) [--ibb] + Add a component + -j valid size: 0x10000 0x20000 0x40000 0x80000 0x100000 + add-payload [-r image,regions] -f FILE -n NAME [-A hash] \ + [-c compression] [-b base-address] \ + (linux specific: [-C cmdline] [-I initrd]) + Add a payload to the ROM + add-stage [-r image,regions] -f FILE -n NAME [-A hash] \ + [-c compression] [-b base] [-S section-to-ignore] \ + [-a alignment] [-y|--xip] [-P page-size] [--ibb] + Add a stage to the ROM + add-flat-binary [-r image,regions] -f FILE -n NAME \ + [-A hash] -l load-address -e entry-point \ + [-c compression] [-b base] + Add a 32bit flat mode binary + add-int [-r image,regions] -i INTEGER -n NAME [-b base] + Add a raw 64-bit integer value + add-master-header [-r image,regions] \ + [-j topswap-size] (Intel CPUs only) + Add a legacy CBFS master header + remove [-r image,regions] -n NAME + Remove a component + compact -r image,regions + Defragment CBFS image. 
+ copy -r image,regions -R source-region + Create a copy (duplicate) cbfs instance in fmap + create -m ARCH -s size [-b bootblock offset] \ + [-o CBFS offset] [-H header offset] [-B bootblock] + Create a legacy ROM file with CBFS master header* + create -M flashmap [-r list,of,regions,containing,cbfses] + Create a new-style partitioned firmware image + locate [-r image,regions] -f FILE -n NAME [-P page-size] \ + [-a align] [-T] + Find a place for a file of that size + layout [-w] + List mutable (or, with -w, readable) image regions + print [-r image,regions] + Show the contents of the ROM + extract [-r image,regions] [-m ARCH] -n NAME -f FILE [-U] + Extracts a file from ROM + write [-F] -r image,regions -f file [-u | -d] [-i int] + Write file into same-size [or larger] raw region + read [-r fmap-region] -f file + Extract raw region contents into binary file + truncate [-r fmap-region] + Truncate CBFS and print new size on stdout + expand [-r fmap-region] + Expand CBFS to span entire region +OFFSETs: + Numbers accompanying -b, -H, and -o switches* may be provided + in two possible formats: if their value is greater than + 0x80000000, they are interpreted as a top-aligned x86 memory + address; otherwise, they are treated as an offset into flash. +ARCHes: + arm64, arm, mips, ppc64, power8, riscv, x86, unknown +TYPEs: + bootblock, cbfs header, stage, simple elf, fit, optionrom, bootsplash, raw, + vsa, mbi, microcode, fsp, mrc, cmos_default, cmos_layout, spd, + mrc_cache, mma, efi, struct, deleted, null + +* Note that these actions and switches are only valid when + working with legacy images whose structure is described + primarily by a CBFS master header. New-style images, in + contrast, exclusively make use of an FMAP to describe their + layout: this must minimally contain an 'FMAP' section + specifying the location of this FMAP itself and a 'COREBOOT' + section describing the primary CBFS. 
It should also be noted + that, when working with such images, the -F and -r switches + default to 'COREBOOT' for convenience, and both the -b switch to + CBFS operations and the output of the locate action become + relative to the selected CBFS region's lowest address. + The one exception to this rule is the top-aligned address, + which is always relative to the end of the entire image + rather than relative to the local region; this is true for + for both input (sufficiently large) and output (-T) data. + + +Since binman has a native implementation of CBFS (see cbfs_util.py), we don't +actually need this tool, except for sanity checks in the tests. +""" + +from binman import bintool + +class Bintoolcbfstool(bintool.Bintool): + """Coreboot filesystem (CBFS) tool + + This bintool supports creating new CBFS images and adding files to an + existing image, i.e. the features needed by binman. + + It also supports fetching a binary cbfstool, since building it from source + is fairly slow. + + Documentation about CBFS is at https://www.coreboot.org/CBFS + """ + def __init__(self, name): + super().__init__(name, 'Manipulate CBFS files') + + def create_new(self, cbfs_fname, size, arch='x86'): + """Create a new CBFS + + Args: + cbfs_fname (str): Filename of CBFS to create + size (int): Size of CBFS in bytes + arch (str): Architecture for which this CBFS is intended + + Returns: + str: Tool output + """ + args = [cbfs_fname, 'create', '-s', f'{size:#x}', '-m', arch] + return self.run_cmd(*args) + + # pylint: disable=R0913 + def add_raw(self, cbfs_fname, name, fname, compress=None, base=None): + """Add a raw file to the CBFS + + Args: + cbfs_fname (str): Filename of CBFS to create + name (str): Name to use inside the CBFS + fname (str): Filename of file to add + compress (str): Compression to use (cbfs_util.COMPRESS_NAMES) or + None for None + base (int): Address to place the file, or None for anywhere + + Returns: + str: Tool output + """ + args = [cbfs_fname, + 'add', + 
'-n', name, + '-t', 'raw', + '-f', fname, + '-c', compress or 'none'] + if base: + args += ['-b', f'{base:#x}'] + return self.run_cmd(*args) + + def add_stage(self, cbfs_fname, name, fname): + """Add a stage file to the CBFS + + Args: + cbfs_fname (str): Filename of CBFS to create + name (str): Name to use inside the CBFS + fname (str): Filename of file to add + + Returns: + str: Tool output + """ + args = [cbfs_fname, + 'add-stage', + '-n', name, + '-f', fname + ] + return self.run_cmd(*args) + + def fail(self): + """Run cbfstool with invalid arguments to check it reports failure + + This is really just a sanity check + + Returns: + CommandResult: Result from running the bad command + """ + args = ['missing-file', 'bad-command'] + return self.run_cmd_result(*args) + + def fetch(self, method): + """Fetch handler for cbfstool + + This installs cbfstool by downloading from Google Drive. + + Args: + method (FETCH_...): Method to use + + Returns: + True if the file was fetched and now installed, None if a method + other than FETCH_BIN was requested + + Raises: + ValueError: Fetching could not be completed + """ + if method != bintool.FETCH_BIN: + return None + fname, tmpdir = self.fetch_from_drive( + '1IOnE0Qvy97d-0WOCwF64xBGpKSY2sMtJ') + return fname, tmpdir diff --git a/tools/binman/btool/fiptool.py b/tools/binman/btool/fiptool.py new file mode 100644 index 00000000000..c6d71cebc21 --- /dev/null +++ b/tools/binman/btool/fiptool.py @@ -0,0 +1,123 @@ +# SPDX-License-Identifier: GPL-2.0+ +# Copyright 2022 Google LLC +# +"""Bintool implementation for fiptool + +fiptool provides a way to package firmware in an ARM Trusted Firmware Firmware +Image Package (ATF FIP) format. It is used with Trusted Firmware A, for example.
+ +Documentation is at: +https://trustedfirmware-a.readthedocs.io/en/latest/getting_started/tools-build.html?highlight=fiptool#building-and-using-the-fip-tool + +Source code is at: +https://git.trustedfirmware.org/TF-A/trusted-firmware-a.git + +Here is the help: + +usage: fiptool [--verbose] <command> [<args>] +Global options supported: + --verbose Enable verbose output for all commands. + +Commands supported: + info List images contained in FIP. + create Create a new FIP with the given images. + update Update an existing FIP with the given images. + unpack Unpack images from FIP. + remove Remove images from FIP. + version Show fiptool version. + help Show help for given command. + +""" + +from binman import bintool + +class Bintoolfiptool(bintool.Bintool): + """Image generation for ARM Trusted Firmware + + This bintool supports running `fiptool` with some basic parameters as + needed by binman. + + It also supports building fiptool from source. + + fiptool provides a way to package firmware in an ARM Trusted Firmware + Firmware Image Package (ATF FIP) format. It is used with Trusted Firmware A, + for example. + + See `TF-A FIP tool documentation`_ for more information. + + ..
_`TF-A FIP tool documentation`: + https://trustedfirmware-a.readthedocs.io/en/latest/getting_started/tools-build.html?highlight=fiptool#building-and-using-the-fip-tool + """ + def __init__(self, name): + super().__init__(name, 'Manipulate ATF FIP files') + + def info(self, fname): + """Get info on a FIP image + + Args: + fname (str): Filename to check + + Returns: + str: Tool output + """ + args = ['info', fname] + return self.run_cmd(*args) + + # pylint: disable=R0913 + def create_new(self, fname, align, plat_toc_flags, fwu, tb_fw, blob_uuid, + blob_file): + """Create a new FIP + + Args: + fname (str): Filename to write to + align (int): Alignment to use for entries + plat_toc_flags (int): Flags to use for the TOC header + fwu (str): Filename for the fwu entry + tb_fw (str): Filename for the tb_fw entry + blob_uuid (str): UUID for the blob entry + blob_file (str): Filename for the blob entry + + Returns: + str: Tool output + """ + args = [ + 'create', + '--align', f'{align:x}', + '--plat-toc-flags', f'{plat_toc_flags:#x}', + '--fwu', fwu, + '--tb-fw', tb_fw, + '--blob', f'uuid={blob_uuid},file={blob_file}', + fname] + return self.run_cmd(*args) + + def create_bad(self): + """Run fiptool with invalid arguments""" + args = ['create', '--fred'] + return self.run_cmd_result(*args) + + def fetch(self, method): + """Fetch handler for fiptool + + This builds the tool from source + + Returns: + tuple: + str: Filename of fetched file to copy to a suitable directory + str: Name of temp directory to remove, or None + """ + if method != bintool.FETCH_BUILD: + return None + result = self.build_from_git( + 'https://git.trustedfirmware.org/TF-A/trusted-firmware-a.git', + 'fiptool', + 'tools/fiptool/fiptool') + return result + + def version(self): + """Version handler for fiptool + + Returns: + str: Version number of fiptool + """ + out = self.run_cmd('version').strip() + return out or super().version() diff --git a/tools/binman/btool/futility.py b/tools/binman/btool/futility.py 
new file mode 100644 index 00000000000..614daaade43 --- /dev/null +++ b/tools/binman/btool/futility.py @@ -0,0 +1,178 @@ +# SPDX-License-Identifier: GPL-2.0+ +# Copyright 2022 Google LLC +# +"""Bintool implementation for futility + +futility (flash utility) is a tool for working with Chromium OS flash images. +This implements just the features used by Binman. + +Documentation is at: + https://chromium.googlesource.com/chromiumos/platform/vboot/+/refs/heads/main/_vboot_reference/README + +Source code: + https://chromium.googlesource.com/chromiumos/platform/vboot/+/refs/heads/master/_vboot_reference/futility + +Here is the help: +Usage: futility [options] COMMAND [args...] + +This is the unified firmware utility, which will eventually replace +most of the distinct verified boot tools formerly produced by the +vboot_reference package. + +When symlinked under the name of one of those previous tools, it should +fully implement the original behavior. It can also be invoked directly +as futility, followed by the original name as the first argument. 
+ +Global options: + + --vb1 Use only vboot v1.0 binary formats + --vb21 Use only vboot v2.1 binary formats + --debug Be noisy about what's going on + +The following commands are built-in: + + bdb Common boot flow utility + create Create a keypair from an RSA .pem file + dump_fmap Display FMAP contents from a firmware image + dump_kernel_config Prints the kernel command line + gbb Manipulate the Google Binary Block (GBB) + gbb_utility Legacy name for `gbb` command + help Show a bit of help (you're looking at it) + load_fmap Replace the contents of specified FMAP areas + pcr Simulate a TPM PCR extension operation + show Display the content of various binary components + sign Sign / resign various binary components + update Update system firmware + validate_rec_mrc Validates content of Recovery MRC cache + vbutil_firmware Verified boot firmware utility + vbutil_kernel Creates, signs, and verifies the kernel partition + vbutil_key Wraps RSA keys with vboot headers + vbutil_keyblock Creates, signs, and verifies a keyblock + verify Verify the signatures of various binary components + version Show the futility source revision and build date +""" + +from binman import bintool + +class Bintoolfutility(bintool.Bintool): + """Handles the 'futility' tool + + futility (flash utility) is a tool for working with Chromium OS flash + images. This Bintool implements just the features used by Binman, related to + GBB creation and firmware signing. + + A binary version of the tool can be fetched. + + See `Chromium OS vboot documentation`_ for more information. + + .. 
_`Chromium OS vboot documentation`: + https://chromium.googlesource.com/chromiumos/platform/vboot/+/refs/heads/main/_vboot_reference/README + """ + def __init__(self, name): + super().__init__(name, 'Chromium OS firmware utility') + + def gbb_create(self, fname, sizes): + """Create a new Google Binary Block + + Args: + fname (str): Filename to write to + sizes (list of int): Sizes of each region: + hwid_size, rootkey_size, bmpfv_size, recoverykey_size + + Returns: + str: Tool output + """ + args = [ + 'gbb_utility', + '-c', + ','.join(['%#x' % size for size in sizes]), + fname + ] + return self.run_cmd(*args) + + # pylint: disable=R0913 + def gbb_set(self, fname, hwid, rootkey, recoverykey, flags, bmpfv): + """Set the parameters in a Google Binary Block + + Args: + fname (str): Filename to update + hwid (str): Hardware ID to use + rootkey (str): Filename of root key, e.g. 'root_key.vbpubk' + recoverykey (str): Filename of recovery key, + e.g. 'recovery_key.vbpubk' + flags (int): GBB flags to use + bmpfv (str): Filename of firmware bitmaps (bmpblk file) + + Returns: + str: Tool output + """ + args = ['gbb_utility', + '-s', + f'--hwid={hwid}', + f'--rootkey={rootkey}', + f'--recoverykey={recoverykey}', + f'--flags={flags}', + f'--bmpfv={bmpfv}', + fname + ] + return self.run_cmd(*args) + + def sign_firmware(self, vblock, keyblock, signprivate, version, firmware, + kernelkey, flags): + """Sign firmware to create a vblock file + + Args: + vblock (str): Filename to write the vblock to + keyblock (str): Filename of keyblock file + signprivate (str): Filename of private key + version (int): Version number + firmware (str): Filename of firmware binary to sign + kernelkey (str): Filename of kernel key + flags (int): Preamble flags + + Returns: + str: Tool output + """ + args = [ + 'vbutil_firmware', + '--vblock', vblock, + '--keyblock', keyblock, + '--signprivate', signprivate, + '--version', version, + '--fw', firmware, + '--kernelkey', kernelkey, + '--flags', flags + ] +
return self.run_cmd(*args) + + def fetch(self, method): + """Fetch handler for futility + + This installs futility using a binary download. + + Args: + method (FETCH_...): Method to use + + Returns: + True if the file was fetched, None if a method other than FETCH_BIN + was requested + + Raises: + ValueError: Fetching could not be completed + """ + if method != bintool.FETCH_BIN: + return None + fname, tmpdir = self.fetch_from_drive( + '1hdsInzsE4aJbmBeJ663kYgjOQyW1I-E0') + return fname, tmpdir + + def version(self): + """Version handler for futility + + Returns: + str: Version string for futility + """ + out = self.run_cmd('version').strip() + if not out: + return super().version() + return out diff --git a/tools/binman/btool/ifwitool.py b/tools/binman/btool/ifwitool.py new file mode 100644 index 00000000000..96778fce87f --- /dev/null +++ b/tools/binman/btool/ifwitool.py @@ -0,0 +1,166 @@ +# SPDX-License-Identifier: GPL-2.0+ +# Copyright 2022 Google LLC +# +"""Bintool implementation for ifwitool + +ifwitool provides a way to package firmware in an Intel Firmware Image (IFWI) +file on some Intel SoCs, e.g. Apollo Lake.
+ +Documentation is not really available so far as I can tell + +Source code is at tools/ifwitool.c which is a cleaned-up version of +https://github.com/coreboot/coreboot/blob/master/util/cbfstool/ifwitool.c + +Here is the help: + +ifwitool: Utility for IFWI manipulation + +USAGE: + /tmp/b/sandbox/tools/ifwitool [-h] + /tmp/b/sandbox/tools/ifwitool FILE COMMAND [PARAMETERS] + +COMMANDs: + add -f FILE -n NAME [-d -e ENTRY] + create -f FILE + delete -n NAME + extract -f FILE -n NAME [-d -e ENTRY] + print [-d] + replace -f FILE -n NAME [-d -e ENTRY] +OPTIONs: + -f FILE : File to read/write/create/extract + -d : Perform directory operation + -e ENTRY: Name of directory entry to operate on + -v : Verbose level + -h : Help message + -n NAME : Name of sub-partition to operate on + +NAME should be one of: +SMIP(SMIP) +RBEP(CSE_RBE) +FTPR(CSE_BUP) +UCOD(Microcode) +IBBP(Bootblock) +S_BPDT(S-BPDT) +OBBP(OEM boot block) +NFTP(CSE_MAIN) +ISHP(ISH) +DLMP(CSE_IDLM) +IFP_OVERRIDE(IFP_OVERRIDE) +DEBUG_TOKENS(Debug Tokens) +UFS_PHY(UFS Phy) +UFS_GPP(UFS GPP) +PMCP(PMC firmware) +IUNP(IUNIT) +NVM_CONFIG(NVM Config) +UEP(UEP) +UFS_RATE_B(UFS Rate B Config) +""" + +from binman import bintool + +class Bintoolifwitool(bintool.Bintool): + """Handles the 'ifwitool' tool + + This bintool supports running `ifwitool` with some basic parameters as + needed by binman. It includes creating a file from a FIT as well as adding, + replacing, deleting and extracting subparts. + + The tool is built as part of U-Boot, but a binary version can be fetched if + required. + + ifwitool provides a way to package firmware in an Intel Firmware Image + (IFWI) file on some Intel SoCs, e.g. Apollo Lake.
+ """ + def __init__(self, name): + super().__init__(name, 'Manipulate Intel IFWI files') + + def create_ifwi(self, intel_fit, ifwi_file): + """Create a new IFWI file, using an existing Intel FIT binary + + Args: + intel_fit (str): Filename of exist Intel FIT file + ifwi_file (str): Output filename to write the new IFWI too + + Returns: + str: Tool output + """ + args = [intel_fit, 'create', '-f', ifwi_file] + return self.run_cmd(*args) + + def delete_subpart(self, ifwi_file, subpart): + """Delete a subpart within the IFWI file + + Args: + ifwi_file (str): IFWI filename to update + subpart (str): Name of subpart to delete, e.g. 'OBBP' + + Returns: + str: Tool output + """ + args = [ifwi_file, 'delete', '-n', subpart] + return self.run_cmd(*args) + + # pylint: disable=R0913 + def add_subpart(self, ifwi_file, subpart, entry_name, infile, + replace=False): + """Add or replace a subpart within the IFWI file + + Args: + ifwi_file (str): IFWI filename to update + subpart (str): Name of subpart to add/replace + entry_nme (str): Name of entry to add/replace + replace (bool): True to replace the existing entry, False to add a + new one + + Returns: + str: Tool output + """ + args = [ + ifwi_file, + 'replace' if replace else 'add', + '-n', subpart, + '-d', '-e', entry_name, + '-f', infile, + ] + return self.run_cmd(*args) + + def extract(self, ifwi_file, subpart, entry_name, outfile): + """Extract a subpart from the IFWI file + + Args: + ifwi_file (str): IFWI filename to extract from + subpart (str): Name of subpart to extract + entry_nme (str): Name of entry to extract + + Returns: + str: Tool output + """ + args = [ + ifwi_file, + 'extract', + '-n', subpart, + '-d', '-e', entry_name, + '-f', outfile, + ] + return self.run_cmd(*args) + + def fetch(self, method): + """Fetch handler for ifwitool + + This installs ifwitool using a binary download. 
+ + Args: + method (FETCH_...): Method to use + + Returns: + True if the file was fetched, None if a method other than FETCH_BIN + was requested + + Raises: + ValueError: Fetching could not be completed + """ + if method != bintool.FETCH_BIN: + return None + fname, tmpdir = self.fetch_from_drive( + '18JDghOxlt2Hcc5jv51O1t6uNVHQ0XKJS') + return fname, tmpdir diff --git a/tools/binman/btool/lz4.py b/tools/binman/btool/lz4.py new file mode 100644 index 00000000000..d165f52da92 --- /dev/null +++ b/tools/binman/btool/lz4.py @@ -0,0 +1,140 @@ +# SPDX-License-Identifier: GPL-2.0+ +# Copyright 2022 Google LLC +# +"""Bintool implementation for lz4 + +lz4 allows compression and decompression of files. + +Documentation is available via:: + + man lz4 + +Here is the help: + +*** LZ4 command line interface 64-bits v1.9.3, by Yann Collet *** +Usage : + lz4 [arg] [input] [output] + +input : a filename + with no FILE, or when FILE is - or stdin, read standard input +Arguments : + -1 : Fast compression (default) + -9 : High compression + -d : decompression (default for .lz4 extension) + -z : force compression + -D FILE: use FILE as dictionary + -f : overwrite output without prompting + -k : preserve source files(s) (default) +--rm : remove source file(s) after successful de/compression + -h/-H : display help/long help and exit + +Advanced arguments : + -V : display Version number and exit + -v : verbose mode + -q : suppress warnings; specify twice to suppress errors too + -c : force write to standard output, even if it is the console + -t : test compressed file integrity + -m : multiple input files (implies automatic output filenames) + -r : operate recursively on directories (sets also -m) + -l : compress using Legacy format (Linux kernel compression) + -B# : cut file into blocks of size # bytes [32+] + or predefined block size [4-7] (default: 7) + -BI : Block Independence (default) + -BD : Block dependency (improves compression ratio) + -BX : enable block checksum
(default:disabled) +--no-frame-crc : disable stream checksum (default:enabled) +--content-size : compressed frame includes original size (default:not present) +--list FILE : lists information about .lz4 files (useful for files compressed + with --content-size flag) +--[no-]sparse : sparse mode (default:enabled on file, disabled on stdout) +--favor-decSpeed: compressed files decompress faster, but are less compressed +--fast[=#]: switch to ultra fast compression level (default: 1) +--best : same as -12 +Benchmark arguments : + -b# : benchmark file(s), using # compression level (default : 1) + -e# : test all compression levels from -bX to # (default : 1) + -i# : minimum evaluation time in seconds (default : 3s) +""" + +import re +import tempfile + +from binman import bintool +from patman import tools + +# pylint: disable=C0103 +class Bintoollz4(bintool.Bintool): + """Compression/decompression using the LZ4 algorithm + + This bintool supports running `lz4` to compress and decompress data, as + used by binman. + + It is also possible to fetch the tool, which uses `apt` to install it. 
+ + Documentation is available via:: + + man lz4 + """ + def __init__(self, name): + super().__init__(name, 'lz4 compression') + + def compress(self, indata): + """Compress data with lz4 + + Args: + indata (bytes): Data to compress + + Returns: + bytes: Compressed data + """ + with tempfile.NamedTemporaryFile(prefix='comp.tmp', + dir=tools.GetOutputDir()) as tmp: + tools.WriteFile(tmp.name, indata) + args = ['--no-frame-crc', '-B4', '-5', '-c', tmp.name] + return self.run_cmd(*args, binary=True) + + def decompress(self, indata): + """Decompress data with lz4 + + Args: + indata (bytes): Data to decompress + + Returns: + bytes: Decompressed data + """ + with tempfile.NamedTemporaryFile(prefix='decomp.tmp', + dir=tools.GetOutputDir()) as inf: + tools.WriteFile(inf.name, indata) + args = ['-cd', inf.name] + return self.run_cmd(*args, binary=True) + + def fetch(self, method): + """Fetch handler for lz4 + + This installs the lz4 package using the apt utility. + + Args: + method (FETCH_...): Method to use + + Returns: + True if the file was fetched and now installed, None if a method + other than FETCH_BIN was requested + + Raises: + ValueError: Fetching could not be completed + """ + if method != bintool.FETCH_BIN: + return None + return self.apt_install('lz4') + + def version(self): + """Version handler + + Returns: + str: Version number of lz4 + """ + out = self.run_cmd('-V').strip() + if not out: + return super().version() + m_version = re.match(r'.* (v[0-9.]*),.*', out) + return m_version.group(1) if m_version else out diff --git a/tools/binman/btool/lzma_alone.py b/tools/binman/btool/lzma_alone.py new file mode 100644 index 00000000000..d7c62dfd2a5 --- /dev/null +++ b/tools/binman/btool/lzma_alone.py @@ -0,0 +1,126 @@ +# SPDX-License-Identifier: GPL-2.0+ +# Copyright 2022 Google LLC +# +"""Bintool implementation for lzma_alone + +lzma_alone allows compression and decompression of files, using an older version +of lzma.
+ +Documentation is available via:: + + man lzma_alone + +Here is the help: + +LZMA 9.22 beta : Igor Pavlov : Public domain : 2011-04-18 + +Usage: LZMA <e|d> inputFile outputFile [<switches>...] + e: encode file + d: decode file + b: Benchmark +<Switches> + -a{N}: set compression mode - [0, 1], default: 1 (max) + -d{N}: set dictionary size - [12, 30], default: 23 (8MB) + -fb{N}: set number of fast bytes - [5, 273], default: 128 + -mc{N}: set number of cycles for match finder + -lc{N}: set number of literal context bits - [0, 8], default: 3 + -lp{N}: set number of literal pos bits - [0, 4], default: 0 + -pb{N}: set number of pos bits - [0, 4], default: 2 + -mf{MF_ID}: set Match Finder: [bt2, bt3, bt4, hc4], default: bt4 + -mt{N}: set number of CPU threads + -eos: write End Of Stream marker + -si: read data from stdin + -so: write data to stdout +""" + +import re +import tempfile + +from binman import bintool +from patman import tools + +# pylint: disable=C0103 +class Bintoollzma_alone(bintool.Bintool): + """Compression/decompression using the LZMA algorithm + + This bintool supports running `lzma_alone` to compress and decompress data, + as used by binman. + + It is also possible to fetch the tool, which uses `apt` to install it. 
+ + Documentation is available via:: + + man lzma_alone + """ + def __init__(self, name): + super().__init__(name, 'lzma_alone compression') + + def compress(self, indata): + """Compress data with lzma_alone + + Args: + indata (bytes): Data to compress + + Returns: + bytes: Compressed data + """ + with tempfile.NamedTemporaryFile(prefix='comp.tmp', + dir=tools.GetOutputDir()) as inf: + tools.WriteFile(inf.name, indata) + with tempfile.NamedTemporaryFile(prefix='compo.otmp', + dir=tools.GetOutputDir()) as outf: + args = ['e', inf.name, outf.name, '-lc1', '-lp0', '-pb0', '-d8'] + self.run_cmd(*args, binary=True) + return tools.ReadFile(outf.name) + + def decompress(self, indata): + """Decompress data with lzma_alone + + Args: + indata (bytes): Data to decompress + + Returns: + bytes: Decompressed data + """ + with tempfile.NamedTemporaryFile(prefix='decomp.tmp', + dir=tools.GetOutputDir()) as inf: + tools.WriteFile(inf.name, indata) + with tempfile.NamedTemporaryFile(prefix='compo.otmp', + dir=tools.GetOutputDir()) as outf: + args = ['d', inf.name, outf.name] + self.run_cmd(*args, binary=True) + return tools.ReadFile(outf.name, binary=True) + + def fetch(self, method): + """Fetch handler for lzma_alone + + This installs the lzma-alone package using the apt utility. + + Args: + method (FETCH_...): Method to use + + Returns: + True if the file was fetched and now installed, None if a method + other than FETCH_BIN was requested + + Raises: + ValueError: Fetching could not be completed + """ + if method != bintool.FETCH_BIN: + return None + return self.apt_install('lzma-alone') + + def version(self): + """Version handler + + Returns: + str: Version number of lzma_alone + """ + out = self.run_cmd_result('', raise_on_error=False).stderr.strip() + lines = out.splitlines() + if not lines: + return super().version() + out = lines[0] + # e.g.
LZMA 9.22 beta : Igor Pavlov : Public domain : 2011-04-18 + m_version = re.match(r'LZMA ([^:]*).*', out) + return m_version.group(1).strip() if m_version else out diff --git a/tools/binman/btool/mkimage.py b/tools/binman/btool/mkimage.py new file mode 100644 index 00000000000..c85bfe053cf --- /dev/null +++ b/tools/binman/btool/mkimage.py @@ -0,0 +1,80 @@ +# SPDX-License-Identifier: GPL-2.0+ +# Copyright 2022 Google LLC +# +"""Bintool implementation for mkimage""" + +import re + +from binman import bintool + +class Bintoolmkimage(bintool.Bintool): + """Image generation for U-Boot + + This bintool supports running `mkimage` with some basic parameters as + needed by binman. + + Normally binman uses the mkimage built by U-Boot. But when run outside the + U-Boot build system, binman can use the version installed in your system. + Support is provided for fetching this on Debian-like systems, using apt. + """ + def __init__(self, name): + super().__init__(name, 'Generate image for U-Boot') + + # pylint: disable=R0913 + def run(self, reset_timestamp=False, output_fname=None, external=False, + pad=None, version=False): + """Run mkimage + + Args: + reset_timestamp: True to update the timestamp in the FIT + output_fname: Output filename to write to + external: True to create an 'external' FIT, where the binaries are + located outside the main data structure + pad: Bytes to use for padding the FIT devicetree output. This allows + other things to be easily added later, if required, such as + signatures + version: True to get the mkimage version + """ + args = [] + if external: + args.append('-E') + if pad: + args += ['-p', f'{pad:x}'] + if reset_timestamp: + args.append('-t') + if output_fname: + args += ['-F', output_fname] + if version: + args.append('-V') + return self.run_cmd(*args) + + def fetch(self, method): + """Fetch handler for mkimage + + This installs mkimage using the apt utility.
+ + Args: + method (FETCH_...): Method to use + + Returns: + True if the file was fetched and now installed, None if a method + other than FETCH_BIN was requested + + Raises: + ValueError: Fetching could not be completed + """ + if method != bintool.FETCH_BIN: + return None + return self.apt_install('u-boot-tools') + + def version(self): + """Version handler for mkimage + + Returns: + str: Version string for mkimage + """ + out = self.run(version=True).strip() + if not out: + return super().version() + m_version = re.match(r'mkimage version (.*)', out) + return m_version.group(1) if m_version else out diff --git a/tools/binman/cbfs_util.py b/tools/binman/cbfs_util.py index 39973371b93..eea7868b16c 100644 --- a/tools/binman/cbfs_util.py +++ b/tools/binman/cbfs_util.py @@ -20,6 +20,7 @@ import io import struct import sys +from binman import comp_util from binman import elf from patman import command from patman import tools @@ -240,9 +241,9 @@ class CbfsFile(object): """Handle decompressing data if necessary""" indata = self.data if self.compress == COMPRESS_LZ4: - data = tools.Decompress(indata, 'lz4', with_header=False) + data = comp_util.decompress(indata, 'lz4', with_header=False) elif self.compress == COMPRESS_LZMA: - data = tools.Decompress(indata, 'lzma', with_header=False) + data = comp_util.decompress(indata, 'lzma', with_header=False) else: data = indata self.memlen = len(data) @@ -361,9 +362,9 @@ class CbfsFile(object): elif self.ftype == TYPE_RAW: orig_data = data if self.compress == COMPRESS_LZ4: - data = tools.Compress(orig_data, 'lz4', with_header=False) + data = comp_util.compress(orig_data, 'lz4', with_header=False) elif self.compress == COMPRESS_LZMA: - data = tools.Compress(orig_data, 'lzma', with_header=False) + data = comp_util.compress(orig_data, 'lzma', with_header=False) self.memlen = len(orig_data) self.data_len = len(data) attr = struct.pack(ATTR_COMPRESSION_FORMAT, @@ -861,27 +862,3 @@ class CbfsReader(object): val += data[:pos] break return
val.decode('utf-8') - - -def cbfstool(fname, *cbfs_args, **kwargs): - """Run cbfstool with provided arguments - - If the tool fails then this function raises an exception and prints out the - output and stderr. - - Args: - fname: Filename of CBFS - *cbfs_args: List of arguments to pass to cbfstool - - Returns: - CommandResult object containing the results - """ - args = ['cbfstool', fname] + list(cbfs_args) - if kwargs.get('base') is not None: - args += ['-b', '%#x' % kwargs['base']] - result = command.RunPipe([args], capture=not VERBOSE, - capture_stderr=not VERBOSE, raise_on_error=False) - if result.return_code: - print(result.stderr, file=sys.stderr) - raise Exception("Failed to run (error %d): '%s'" % - (result.return_code, ' '.join(args))) diff --git a/tools/binman/cbfs_util_test.py b/tools/binman/cbfs_util_test.py index 2c62c8a0f81..494f6145edb 100755 --- a/tools/binman/cbfs_util_test.py +++ b/tools/binman/cbfs_util_test.py @@ -16,8 +16,10 @@ import struct import tempfile import unittest +from binman import bintool from binman import cbfs_util from binman.cbfs_util import CbfsWriter +from binman import comp_util from binman import elf from patman import test_util from patman import tools @@ -45,18 +47,10 @@ class TestCbfs(unittest.TestCase): # compressing files tools.PrepareOutputDir(None) - cls.have_cbfstool = True - try: - tools.Run('which', 'cbfstool') - except: - cls.have_cbfstool = False + cls.cbfstool = bintool.Bintool.create('cbfstool') + cls.have_cbfstool = cls.cbfstool.is_present() - cls.have_lz4 = True - try: - tools.Run('lz4', '--no-frame-crc', '-c', - tools.GetInputFilename('u-boot.bin'), binary=True) - except: - cls.have_lz4 = False + cls.have_lz4 = comp_util.HAVE_LZ4 @classmethod def tearDownClass(cls): @@ -177,19 +171,19 @@ class TestCbfs(unittest.TestCase): if not self.have_cbfstool or not self.have_lz4: return None cbfs_fname = os.path.join(self._indir, 'test.cbfs') - cbfs_util.cbfstool(cbfs_fname, 'create', '-m', arch, '-s', '%#x' % size) + 
self.cbfstool.create_new(cbfs_fname, size, arch) if base: base = [(1 << 32) - size + b for b in base] - cbfs_util.cbfstool(cbfs_fname, 'add', '-n', 'u-boot', '-t', 'raw', - '-c', compress and compress[0] or 'none', - '-f', tools.GetInputFilename( - compress and 'compress' or 'u-boot.bin'), - base=base[0] if base else None) - cbfs_util.cbfstool(cbfs_fname, 'add', '-n', 'u-boot-dtb', '-t', 'raw', - '-c', compress and compress[1] or 'none', - '-f', tools.GetInputFilename( - compress and 'compress' or 'u-boot.dtb'), - base=base[1] if base else None) + self.cbfstool.add_raw( + cbfs_fname, 'u-boot', + tools.GetInputFilename(compress and 'compress' or 'u-boot.bin'), + compress[0] if compress else None, + base[0] if base else None) + self.cbfstool.add_raw( + cbfs_fname, 'u-boot-dtb', + tools.GetInputFilename(compress and 'compress' or 'u-boot.dtb'), + compress[1] if compress else None, + base[1] if base else None) return cbfs_fname def _compare_expected_cbfs(self, data, cbfstool_fname): @@ -223,18 +217,9 @@ class TestCbfs(unittest.TestCase): """Test failure to run cbfstool""" if not self.have_cbfstool: self.skipTest('No cbfstool available') - try: - # In verbose mode this test fails since stderr is not captured. Fix - # this by turning off verbosity. 
- old_verbose = cbfs_util.VERBOSE - cbfs_util.VERBOSE = False - with test_util.capture_sys_output() as (_stdout, stderr): - with self.assertRaises(Exception) as e: - cbfs_util.cbfstool('missing-file', 'bad-command') - finally: - cbfs_util.VERBOSE = old_verbose - self.assertIn('Unknown command', stderr.getvalue()) - self.assertIn('Failed to run', str(e.exception)) + with self.assertRaises(ValueError) as exc: + out = self.cbfstool.fail() + self.assertIn('cbfstool missing-file bad-command', str(exc.exception)) def test_cbfs_raw(self): """Test base handling of a Coreboot Filesystem (CBFS)""" @@ -515,10 +500,8 @@ class TestCbfs(unittest.TestCase): # Compare against what cbfstool creates if self.have_cbfstool: cbfs_fname = os.path.join(self._indir, 'test.cbfs') - cbfs_util.cbfstool(cbfs_fname, 'create', '-m', 'x86', '-s', - '%#x' % size) - cbfs_util.cbfstool(cbfs_fname, 'add-stage', '-n', 'u-boot', - '-f', elf_fname) + self.cbfstool.create_new(cbfs_fname, size) + self.cbfstool.add_stage(cbfs_fname, 'u-boot', elf_fname) self._compare_expected_cbfs(data, cbfs_fname) def test_cbfs_raw_compress(self): diff --git a/tools/binman/cmdline.py b/tools/binman/cmdline.py index 6c685954619..0626b850f48 100644 --- a/tools/binman/cmdline.py +++ b/tools/binman/cmdline.py @@ -105,6 +105,8 @@ controlled by a description in the board device tree.''' help='Use fake device tree contents (for testing only)') build_parser.add_argument('--fake-ext-blobs', action='store_true', help='Create fake ext blobs with dummy content (for testing only)') + build_parser.add_argument('--force-missing-bintools', type=str, + help='Comma-separated list of bintools to consider missing (for testing)') build_parser.add_argument('-i', '--image', type=str, action='append', help='Image filename to build (if not specified, build all)') build_parser.add_argument('-I', '--indir', action='append', @@ -128,6 +130,9 @@ controlled by a description in the board device tree.''' help='Update an ELF file with the output dtb: 
infile,outfile,begin_sym,end_sym') subparsers.add_parser( + 'bintool-docs', help='Write out bintool documentation (see bintool.rst)') + + subparsers.add_parser( 'entry-docs', help='Write out entry documentation (see entries.rst)') list_parser = subparsers.add_parser('ls', help='List files in an image') @@ -167,4 +172,11 @@ controlled by a description in the board device tree.''' test_parser.add_argument('tests', nargs='*', help='Test names to run (omit for all)') + tool_parser = subparsers.add_parser('tool', help='Check bintools') + tool_parser.add_argument('-l', '--list', action='store_true', + help='List all known bintools') + tool_parser.add_argument('-f', '--fetch', action='store_true', + help='fetch a bintool from a known location (or: all/missing)') + tool_parser.add_argument('bintools', type=str, nargs='*') + return parser.parse_args(argv) diff --git a/tools/binman/comp_util.py b/tools/binman/comp_util.py new file mode 100644 index 00000000000..dc76adab352 --- /dev/null +++ b/tools/binman/comp_util.py @@ -0,0 +1,76 @@ +# SPDX-License-Identifier: GPL-2.0+ +# Copyright 2022 Google LLC +# +"""Utilities to compress and decompress data""" + +import struct +import tempfile + +from binman import bintool +from patman import tools + +LZ4 = bintool.Bintool.create('lz4') +HAVE_LZ4 = LZ4.is_present() + +LZMA_ALONE = bintool.Bintool.create('lzma_alone') +HAVE_LZMA_ALONE = LZMA_ALONE.is_present() + + +def compress(indata, algo, with_header=True): + """Compress some data using a given algorithm + + Note that for lzma this uses an old version of the algorithm, not that + provided by xz. + + This requires 'lz4' and 'lzma_alone' tools. It also requires an output + directory to be previously set up, by calling PrepareOutputDir(). 
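The optional header used by this compress/decompress pair is a simple 4-byte little-endian length prefix. A minimal standalone sketch of that framing (the helper names here are illustrative, not binman's API):

```python
# Sketch of the with_header framing used by comp_util: a 4-byte
# little-endian length is prepended so a reader knows how much
# compressed data follows. add_header/strip_header are illustrative
# names only, not part of binman.
import struct

def add_header(data):
    """Prefix data with its length as a little-endian 32-bit value"""
    return struct.pack('<I', len(data)) + data

def strip_header(blob):
    """Recover the original data from a length-prefixed blob"""
    data_len = struct.unpack('<I', blob[:4])[0]
    return blob[4:4 + data_len]
```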
+ + Args: + indata (bytes): Input data to compress + algo (str): Algorithm to use ('none', 'lz4' or 'lzma') + + Returns: + bytes: Compressed data + """ + if algo == 'none': + return indata + if algo == 'lz4': + data = LZ4.compress(indata) + # cbfstool uses a very old version of lzma + elif algo == 'lzma': + data = LZMA_ALONE.compress(indata) + else: + raise ValueError("Unknown algorithm '%s'" % algo) + if with_header: + hdr = struct.pack('<I', len(data)) + data = hdr + data + return data + +def decompress(indata, algo, with_header=True): + """Decompress some data using a given algorithm + + Note that for lzma this uses an old version of the algorithm, not that + provided by xz. + + This requires 'lz4' and 'lzma_alone' tools. It also requires an output + directory to be previously set up, by calling PrepareOutputDir(). + + Args: + indata (bytes): Input data to decompress + algo (str): Algorithm to use ('none', 'lz4' or 'lzma') + + Returns: + bytes: Decompressed data + """ + if algo == 'none': + return indata + if with_header: + data_len = struct.unpack('<I', indata[:4])[0] + indata = indata[4:4 + data_len] + if algo == 'lz4': + data = LZ4.decompress(indata) + elif algo == 'lzma': + data = LZMA_ALONE.decompress(indata) + else: + raise ValueError("Unknown algorithm '%s'" % algo) + return data diff --git a/tools/binman/control.py b/tools/binman/control.py index f4c1fd01568..2daad05b804 100644 --- a/tools/binman/control.py +++ b/tools/binman/control.py @@ -14,6 +14,7 @@ import re import sys from patman import tools +from binman import bintool from binman import cbfs_util from binman import elf from patman import command @@ -139,7 +140,7 @@ def WriteEntryDocs(modules, test_missing=None): Args: modules: List of Module objects to get docs for - test_missing: Used for testing only, to force an entry's documeentation + test_missing: Used for testing only, to force an entry's documentation + to show as missing even if it is present. Should be set to None in normal use.
""" @@ -147,6 +148,18 @@ def WriteEntryDocs(modules, test_missing=None): Entry.WriteDocs(modules, test_missing) +def write_bintool_docs(modules, test_missing=None): + """Write out documentation for all bintools + + Args: + modules: List of Module objects to get docs for + test_missing: Used for testing only, to force an entry's documentation + to show as missing even if it is present. Should be set to None in + normal use. + """ + bintool.Bintool.WriteDocs(modules, test_missing) + + def ListEntries(image_fname, entry_paths): """List the entries in an image @@ -487,6 +500,7 @@ def PrepareImagesAndDtbs(dtb_fname, select_images, update_fdt, use_expanded): # without changing the device-tree size, thus ensuring that our # entry offsets remain the same. for image in images.values(): + image.CollectBintools() image.ExpandEntries() if update_fdt: image.AddMissingProperties(True) @@ -578,11 +592,17 @@ def ProcessImage(image, update_fdt, write_map, get_contents=True, image.CheckFakedBlobs(faked_list) if faked_list: tout.Warning( - "Image '%s:%s' has faked external blobs and is non-functional: %s" % - (image.name, image.image_name, - ' '.join([os.path.basename(e.GetDefaultFilename()) - for e in faked_list]))) - return bool(missing_list) or bool(faked_list) + "Image '%s' has faked external blobs and is non-functional: %s" % + (image.name, ' '.join([os.path.basename(e.GetDefaultFilename()) + for e in faked_list]))) + missing_bintool_list = [] + image.check_missing_bintools(missing_bintool_list) + if missing_bintool_list: + tout.Warning( + "Image '%s' has missing bintools and is non-functional: %s" % + (image.name, ' '.join([os.path.basename(bintool.name) + for bintool in missing_bintool_list]))) + return any([missing_list, faked_list, missing_bintool_list]) def Binman(args): @@ -607,7 +627,7 @@ def Binman(args): from binman.image import Image from binman import state - if args.cmd in ['ls', 'extract', 'replace']: + if args.cmd in ['ls', 'extract', 'replace', 'tool']: try: 
tout.Init(args.verbosity) tools.PrepareOutputDir(None) @@ -622,6 +642,19 @@ def Binman(args): ReplaceEntries(args.image, args.filename, args.indir, args.paths, do_compress=not args.compressed, allow_resize=not args.fix_size, write_map=args.map) + + if args.cmd == 'tool': + tools.SetToolPaths(args.toolpath) + if args.list: + bintool.Bintool.list_all() + elif args.fetch: + if not args.bintools: + raise ValueError( + "Please specify bintools to fetch or 'all' or 'missing'") + bintool.Bintool.fetch_tools(bintool.FETCH_ANY, + args.bintools) + else: + raise ValueError("Invalid arguments to 'tool' subcommand") except: raise finally: @@ -674,6 +707,9 @@ def Binman(args): # Set the first image to timeout, used in testThreadTimeout() images[list(images.keys())[0]].test_section_timeout = True invalid = False + bintool.Bintool.set_missing_list( + args.force_missing_bintools.split(',') if + args.force_missing_bintools else None) for image in images.values(): invalid |= ProcessImage(image, args.update_fdt, args.map, allow_missing=args.allow_missing, diff --git a/tools/binman/elf_test.py b/tools/binman/elf_test.py index ac69a95b654..f7272584878 100644 --- a/tools/binman/elf_test.py +++ b/tools/binman/elf_test.py @@ -99,17 +99,17 @@ class TestElf(unittest.TestCase): """Test that we can obtain a symbol from the ELF file""" fname = self.ElfTestFile('u_boot_ucode_ptr') syms = elf.GetSymbols(fname, []) - self.assertIn('.ucode', syms) + self.assertIn('_dt_ucode_base_size', syms) def testRegexSymbols(self): """Test that we can obtain from the ELF file by regular expression""" fname = self.ElfTestFile('u_boot_ucode_ptr') syms = elf.GetSymbols(fname, ['ucode']) - self.assertIn('.ucode', syms) + self.assertIn('_dt_ucode_base_size', syms) syms = elf.GetSymbols(fname, ['missing']) - self.assertNotIn('.ucode', syms) + self.assertNotIn('_dt_ucode_base_size', syms) syms = elf.GetSymbols(fname, ['missing', 'ucode']) - self.assertIn('.ucode', syms) + self.assertIn('_dt_ucode_base_size', syms) def 
testMissingFile(self): """Test that a missing file is detected""" diff --git a/tools/binman/entry.py b/tools/binman/entry.py index bac90bbbcde..08770ec5f0b 100644 --- a/tools/binman/entry.py +++ b/tools/binman/entry.py @@ -10,6 +10,8 @@ import os import pathlib import sys +from binman import bintool +from binman import comp_util from dtoc import fdt_util from patman import tools from patman.tools import ToHex, ToHexSize @@ -74,6 +76,8 @@ class Entry(object): allow_fake: Allow creating a dummy fake file if the blob file is not available. This is mainly used for testing. external: True if this entry contains an external binary blob + bintools: Bintools used by this entry (only populated for Image) + missing_bintools: List of missing bintools for this entry """ def __init__(self, section, etype, node, name_prefix=''): # Put this here to allow entry-docs and help to work without libfdt @@ -105,6 +109,8 @@ class Entry(object): self.external = False self.allow_missing = False self.allow_fake = False + self.bintools = {} + self.missing_bintools = [] @staticmethod def FindEntryClass(etype, expanded): @@ -960,7 +966,7 @@ features to produce new behaviours. Args: allow_fake: True if allowed, False if not allowed """ - pass + self.allow_fake = allow_fake def CheckMissing(self, missing_list): """Check if any entries in this section have missing external blobs @@ -1011,6 +1017,24 @@ features to produce new behaviours. 
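The fallback convention that the etype changes in this series follow — run a bintool, and if the result is None record it with `record_missing_bintool()` and substitute placeholder bytes so the build still completes — can be sketched like this (`FakeTool`/`FakeEntry` are stand-ins for this illustration, not binman classes):

```python
# Sketch of the missing-bintool fallback pattern: when running the tool
# yields None (tool not installed), the entry records the tool as
# missing and emits zeroed placeholder data instead of failing the
# whole build. Names and the 1024-byte size are illustrative only.
class FakeTool:
    name = 'mkimage'

    def run(self, **kwargs):
        return None  # simulate a bintool that is not present

class FakeEntry:
    def __init__(self):
        self.missing_bintools = []
        self.data = None

    def record_missing_bintool(self, btool):
        """Record a bintool that was needed but not available"""
        self.missing_bintools.append(btool)

    def build(self, tool, indata):
        if tool.run(data=indata) is not None:
            self.data = indata
        else:
            # Bintool is missing; fall back to zeroed placeholder data
            self.record_missing_bintool(tool)
            self.data = bytes(1024)
```

The recorded list is what `check_missing_bintools()` later collects so that binman can warn that the image is non-functional.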
""" return self.allow_missing + def record_missing_bintool(self, bintool): + """Record a missing bintool that was needed to produce this entry + + Args: + bintool (Bintool): Bintool that was missing + """ + self.missing_bintools.append(bintool) + + def check_missing_bintools(self, missing_list): + """Check if any entries in this section have missing bintools + + If there are missing bintools, these are added to the list + + Args: + missing_list: List of Bintool objects to be added to + """ + missing_list += self.missing_bintools + def GetHelpTags(self): """Get the tags use for missing-blob help @@ -1031,7 +1055,7 @@ features to produce new behaviours. self.uncomp_data = indata if self.compress != 'none': self.uncomp_size = len(indata) - data = tools.Compress(indata, self.compress) + data = comp_util.compress(indata, self.compress) return data @classmethod @@ -1065,3 +1089,22 @@ features to produce new behaviours. value: Help text """ pass + + def AddBintools(self, tools): + """Add the bintools used by this entry type + + Args: + tools (dict of Bintool): + """ + pass + + @classmethod + def AddBintool(self, tools, name): + """Add a new bintool to the tools used by this etype + + Args: + name: Name of the tool + """ + btool = bintool.Bintool.create(name) + tools[name] = btool + return btool diff --git a/tools/binman/etype/blob_ext_list.py b/tools/binman/etype/blob_ext_list.py index 136ae819946..29c9092dc35 100644 --- a/tools/binman/etype/blob_ext_list.py +++ b/tools/binman/etype/blob_ext_list.py @@ -37,6 +37,7 @@ class Entry_blob_ext_list(Entry_blob): missing = False pathnames = [] for fname in self._filenames: + fname = self.check_fake_fname(fname) pathname = tools.GetInputFilename( fname, self.external and self.section.GetAllowMissing()) # Allow the file to be missing diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index b41187df80a..6e5f020c502 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py @@ -134,6 +134,7 @@ class 
Entry_fit(Entry): self._fdts = fdts.split() self._fit_default_dt = self.GetEntryArgsOrProps([EntryArg('default-dt', str)])[0] + self.mkimage = None def ReadNode(self): self.ReadEntries() @@ -250,13 +251,21 @@ class Entry_fit(Entry): tools.WriteFile(input_fname, data) tools.WriteFile(output_fname, data) - args = [] + args = {} ext_offset = self._fit_props.get('fit,external-offset') if ext_offset is not None: - args += ['-E', '-p', '%x' % fdt_util.fdt32_to_cpu(ext_offset.value)] - tools.Run('mkimage', '-t', '-F', output_fname, *args) + args = { + 'external': True, + 'pad': fdt_util.fdt32_to_cpu(ext_offset.value) + } + if self.mkimage.run(reset_timestamp=True, output_fname=output_fname, + **args) is not None: + self.SetContents(tools.ReadFile(output_fname)) + else: + # Bintool is missing; just use empty data as the output + self.record_missing_bintool(self.mkimage) + self.SetContents(tools.GetBytes(0, 1024)) - self.SetContents(tools.ReadFile(output_fname)) return True def _BuildInput(self, fdt): @@ -295,3 +304,6 @@ class Entry_fit(Entry): def SetAllowMissing(self, allow_missing): for section in self._fit_sections.values(): section.SetAllowMissing(allow_missing) + + def AddBintools(self, tools): + self.mkimage = self.AddBintool(tools, 'mkimage') diff --git a/tools/binman/etype/gbb.py b/tools/binman/etype/gbb.py index 41554eba8f6..ca8af1be424 100644 --- a/tools/binman/etype/gbb.py +++ b/tools/binman/etype/gbb.py @@ -77,20 +77,27 @@ class Entry_gbb(Entry): bmpfv_size = gbb_size - 0x2180 if bmpfv_size < 0: self.Raise('GBB is too small (minimum 0x2180 bytes)') - sizes = [0x100, 0x1000, bmpfv_size, 0x1000] - sizes = ['%#x' % size for size in sizes] keydir = tools.GetInputFilename(self.keydir) - gbb_set_command = [ - 'gbb_utility', '-s', - '--hwid=%s' % self.hardware_id, - '--rootkey=%s/root_key.vbpubk' % keydir, - '--recoverykey=%s/recovery_key.vbpubk' % keydir, - '--flags=%d' % self.gbb_flags, - '--bmpfv=%s' % tools.GetInputFilename(self.bmpblk), - fname] - - 
tools.Run('futility', 'gbb_utility', '-c', ','.join(sizes), fname) - tools.Run('futility', *gbb_set_command) - - self.SetContents(tools.ReadFile(fname)) + + stdout = self.futility.gbb_create( + fname, [0x100, 0x1000, bmpfv_size, 0x1000]) + if stdout is not None: + stdout = self.futility.gbb_set( + fname, + hwid=self.hardware_id, + rootkey='%s/root_key.vbpubk' % keydir, + recoverykey='%s/recovery_key.vbpubk' % keydir, + flags=self.gbb_flags, + bmpfv=tools.GetInputFilename(self.bmpblk)) + + if stdout is not None: + self.SetContents(tools.ReadFile(fname)) + else: + # Bintool is missing; just use the required amount of zero data + self.record_missing_bintool(self.futility) + self.SetContents(tools.GetBytes(0, gbb_size)) + return True + + def AddBintools(self, tools): + self.futility = self.AddBintool(tools, 'futility') diff --git a/tools/binman/etype/intel_ifwi.py b/tools/binman/etype/intel_ifwi.py index ecbd78df5e5..ed14046ba6e 100644 --- a/tools/binman/etype/intel_ifwi.py +++ b/tools/binman/etype/intel_ifwi.py @@ -59,15 +59,23 @@ class Entry_intel_ifwi(Entry_blob_ext): if self._convert_fit: inname = self._pathname outname = tools.GetOutputFilename('ifwi.bin') - tools.RunIfwiTool(inname, tools.CMD_CREATE, outname) + if self.ifwitool.create_ifwi(inname, outname) is None: + # Bintool is missing; just create a zeroed ifwi.bin + self.record_missing_bintool(self.ifwitool) + self.SetContents(tools.GetBytes(0, 1024)) + self._filename = 'ifwi.bin' self._pathname = outname else: # Provide a different code path here to ensure we have test coverage outname = self._pathname - # Delete OBBP if it is there, then add the required new items. 
- tools.RunIfwiTool(outname, tools.CMD_DELETE, subpart='OBBP') + # Delete OBBP if it is there, then add the required new items + if self.ifwitool.delete_subpart(outname, 'OBBP') is None: + # Bintool is missing; just use zero data + self.record_missing_bintool(self.ifwitool) + self.SetContents(tools.GetBytes(0, 1024)) + return True for entry in self._ifwi_entries.values(): # First get the input data and put it in a file @@ -76,9 +84,11 @@ class Entry_intel_ifwi(Entry_blob_ext): input_fname = tools.GetOutputFilename('input.%s' % uniq) tools.WriteFile(input_fname, data) - tools.RunIfwiTool(outname, - tools.CMD_REPLACE if entry._ifwi_replace else tools.CMD_ADD, - input_fname, entry._ifwi_subpart, entry._ifwi_entry_name) + # At this point we know that ifwitool is present, so we don't need + # to check for None here + self.ifwitool.add_subpart( + outname, entry._ifwi_subpart, entry._ifwi_entry_name, + input_fname, entry._ifwi_replace) self.ReadBlobContents() return True @@ -132,3 +142,6 @@ class Entry_intel_ifwi(Entry_blob_ext): if not self.missing: for entry in self._ifwi_entries.values(): entry.WriteSymbols(self) + + def AddBintools(self, tools): + self.ifwitool = self.AddBintool(tools, 'ifwitool') diff --git a/tools/binman/etype/mkimage.py b/tools/binman/etype/mkimage.py index 190398728ec..201ee4b5696 100644 --- a/tools/binman/etype/mkimage.py +++ b/tools/binman/etype/mkimage.py @@ -51,8 +51,14 @@ class Entry_mkimage(Entry): input_fname = tools.GetOutputFilename('mkimage.%s' % uniq) tools.WriteFile(input_fname, data) output_fname = tools.GetOutputFilename('mkimage-out.%s' % uniq) - tools.Run('mkimage', '-d', input_fname, *self._args, output_fname) - self.SetContents(tools.ReadFile(output_fname)) + if self.mkimage.run_cmd('-d', input_fname, *self._args, + output_fname) is not None: + self.SetContents(tools.ReadFile(output_fname)) + else: + # Bintool is missing; just use the input data as the output + self.record_missing_bintool(self.mkimage) + self.SetContents(data) + 
return True def ReadEntries(self): @@ -81,3 +87,6 @@ class Entry_mkimage(Entry): """ for entry in self._mkimage_entries.values(): entry.CheckFakedBlobs(faked_blobs_list) + + def AddBintools(self, tools): + self.mkimage = self.AddBintool(tools, 'mkimage') diff --git a/tools/binman/etype/section.py b/tools/binman/etype/section.py index 7a55d032318..bb375e9063d 100644 --- a/tools/binman/etype/section.py +++ b/tools/binman/etype/section.py @@ -13,6 +13,7 @@ import concurrent.futures import re import sys +from binman import comp_util from binman.entry import Entry from binman import state from dtoc import fdt_util @@ -775,7 +776,7 @@ class Entry_section(Entry): data = parent_data[offset:offset + child.size] if decomp: indata = data - data = tools.Decompress(indata, child.compress) + data = comp_util.decompress(indata, child.compress) if child.uncomp_size: tout.Info("%s: Decompressing data size %#x with algo '%s' to data size %#x" % (child.GetPath(), len(indata), child.compress, @@ -805,6 +806,7 @@ class Entry_section(Entry): Args: allow_fake_blob: True if allowed, False if not allowed """ + super().SetAllowFakeBlob(allow_fake) for entry in self._entries.values(): entry.SetAllowFakeBlob(allow_fake) @@ -830,6 +832,17 @@ class Entry_section(Entry): for entry in self._entries.values(): entry.CheckFakedBlobs(faked_blobs_list) + def check_missing_bintools(self, missing_list): + """Check if any entries in this section have missing bintools + + If there are missing bintools, these are added to the list + + Args: + missing_list: List of Bintool objects to be added to + """ + for entry in self._entries.values(): + entry.check_missing_bintools(missing_list) + def _CollectEntries(self, entries, entries_by_name, add_entry): """Collect all the entries in an section @@ -879,3 +892,7 @@ class Entry_section(Entry): def CheckAltFormats(self, alt_formats): for entry in self._entries.values(): entry.CheckAltFormats(alt_formats) + + def AddBintools(self, tools): + for entry in 
self._entries.values(): + entry.AddBintools(tools) diff --git a/tools/binman/etype/vblock.py b/tools/binman/etype/vblock.py index c0a6a28c943..8bbba273ab6 100644 --- a/tools/binman/etype/vblock.py +++ b/tools/binman/etype/vblock.py @@ -38,6 +38,7 @@ class Entry_vblock(Entry_collection): """ def __init__(self, section, etype, node): super().__init__(section, etype, node) + self.futility = None (self.keydir, self.keyblock, self.signprivate, self.version, self.kernelkey, self.preamble_flags) = self.GetEntryArgsOrProps([ EntryArg('keydir', str), @@ -68,19 +69,21 @@ class Entry_vblock(Entry_collection): input_fname = tools.GetOutputFilename('input.%s' % uniq) tools.WriteFile(input_fname, input_data) prefix = self.keydir + '/' - args = [ - 'vbutil_firmware', - '--vblock', output_fname, - '--keyblock', prefix + self.keyblock, - '--signprivate', prefix + self.signprivate, - '--version', '%d' % self.version, - '--fv', input_fname, - '--kernelkey', prefix + self.kernelkey, - '--flags', '%d' % self.preamble_flags, - ] - #out.Notice("Sign '%s' into %s" % (', '.join(self.value), self.label)) - stdout = tools.Run('futility', *args) - return tools.ReadFile(output_fname) + stdout = self.futility.sign_firmware( + vblock=output_fname, + keyblock=prefix + self.keyblock, + signprivate=prefix + self.signprivate, + version=f'{self.version}', + firmware=input_fname, + kernelkey=prefix + self.kernelkey, + flags=f'{self.preamble_flags}') + if stdout is not None: + data = tools.ReadFile(output_fname) + else: + # Bintool is missing; just use 4KB of zero data + self.record_missing_bintool(self.futility) + data = tools.GetBytes(0, 4096) + return data def ObtainContents(self): data = self.GetVblock(False) @@ -93,3 +96,6 @@ class Entry_vblock(Entry_collection): # The blob may have changed due to WriteSymbols() data = self.GetVblock(True) return self.ProcessContentsUpdate(data) + + def AddBintools(self, tools): + self.futility = self.AddBintool(tools, 'futility') diff --git
a/tools/binman/fip_util.py b/tools/binman/fip_util.py index 5f7dbc04d56..868d0b6b16d 100755 --- a/tools/binman/fip_util.py +++ b/tools/binman/fip_util.py @@ -623,31 +623,5 @@ directory''') return 0 -def fiptool(fname, *fip_args): - """Run fiptool with provided arguments - - If the tool fails then this function raises an exception and prints out the - output and stderr. - - Args: - fname (str): Filename of FIP - *fip_args: List of arguments to pass to fiptool - - Returns: - CommandResult: object containing the results - - Raises: - ValueError: the tool failed to run - """ - args = ['fiptool', fname] + list(fip_args) - result = command.RunPipe([args], capture=not VERBOSE, - capture_stderr=not VERBOSE, raise_on_error=False) - if result.return_code: - print(result.stderr, file=sys.stderr) - raise ValueError("Failed to run (error %d): '%s'" % - (result.return_code, ' '.join(args))) - return result - - if __name__ == "__main__": sys.exit(main(sys.argv[1:], OUR_FILE)) # pragma: no cover diff --git a/tools/binman/fip_util_test.py b/tools/binman/fip_util_test.py index 4839b298762..4d2093b3a4f 100755 --- a/tools/binman/fip_util_test.py +++ b/tools/binman/fip_util_test.py @@ -22,13 +22,11 @@ sys.path.insert(2, os.path.join(OUR_PATH, '..')) # pylint: disable=C0413 from patman import test_util from patman import tools -import fip_util +from binman import bintool +from binman import fip_util -HAVE_FIPTOOL = True -try: - tools.Run('which', 'fiptool') -except ValueError: - HAVE_FIPTOOL = False +FIPTOOL = bintool.Bintool.create('fiptool') +HAVE_FIPTOOL = FIPTOOL.is_present() # pylint: disable=R0902,R0904 class TestFip(unittest.TestCase): @@ -286,13 +284,13 @@ blah blah''', binary=False) data = fip.get_data() fname = tools.GetOutputFilename('data.fip') tools.WriteFile(fname, data) - result = fip_util.fiptool('info', fname) + result = FIPTOOL.info(fname) self.assertEqual( '''Firmware Updater NS_BL2U: offset=0xB0, size=0x7, cmdline="--fwu" Trusted Boot Firmware BL2: offset=0xC0, 
size=0xE, cmdline="--tb-fw" 00010203-0405-0607-0809-0A0B0C0D0E0F: offset=0xD0, size=0xE, cmdline="--blob" ''', - result.stdout) + result) fwu_data = b'my data' tb_fw_data = b'some more data' @@ -315,11 +313,7 @@ Trusted Boot Firmware BL2: offset=0xC0, size=0xE, cmdline="--tb-fw" fname = tools.GetOutputFilename('data.fip') uuid = 'e3b78d9e-4a64-11ec-b45c-fba2b9b49788' - fip_util.fiptool('create', '--align', '8', '--plat-toc-flags', '0x123', - '--fwu', fwu, - '--tb-fw', tb_fw, - '--blob', f'uuid={uuid},file={other_fw}', - fname) + FIPTOOL.create_new(fname, 8, 0x123, fwu, tb_fw, uuid, other_fw) return fip_util.FipReader(tools.ReadFile(fname)) @@ -396,9 +390,8 @@ Trusted Boot Firmware BL2: offset=0xC0, size=0xE, cmdline="--tb-fw" """Check some error reporting from fiptool""" with self.assertRaises(Exception) as err: with test_util.capture_sys_output(): - fip_util.fiptool('create', '--fred') - self.assertIn("Failed to run (error 1): 'fiptool create --fred'", - str(err.exception)) + FIPTOOL.create_bad() + self.assertIn("unrecognized option '--fred'", str(err.exception)) if __name__ == '__main__': diff --git a/tools/binman/ftest.py b/tools/binman/ftest.py index f4ff7b65831..ca200ae9f8f 100644 --- a/tools/binman/ftest.py +++ b/tools/binman/ftest.py @@ -17,9 +17,13 @@ import struct import sys import tempfile import unittest +import unittest.mock +import urllib.error +from binman import bintool from binman import cbfs_util from binman import cmdline +from binman import comp_util from binman import control from binman import elf from binman import elf_test @@ -193,13 +197,7 @@ class TestFunctional(unittest.TestCase): TestFunctional._MakeInputFile('env.txt', ENV_DATA) - # Travis-CI may have an old lz4 - cls.have_lz4 = True - try: - tools.Run('lz4', '--no-frame-crc', '-c', - os.path.join(cls._indir, 'u-boot.bin'), binary=True) - except: - cls.have_lz4 = False + cls.have_lz4 = comp_util.HAVE_LZ4 @classmethod def tearDownClass(cls): @@ -312,7 +310,8 @@ class 
TestFunctional(unittest.TestCase): entry_args=None, images=None, use_real_dtb=False, use_expanded=False, verbosity=None, allow_missing=False, allow_fake_blobs=False, extra_indirs=None, threads=None, - test_section_timeout=False, update_fdt_in_elf=None): + test_section_timeout=False, update_fdt_in_elf=None, + force_missing_bintools=''): """Run binman with a given test file Args: @@ -341,6 +340,8 @@ class TestFunctional(unittest.TestCase): test_section_timeout: True to force the first time to timeout, as used in testThreadTimeout() update_fdt_in_elf: Value to pass with --update-fdt-in-elf=xxx + force_missing_bintools (str): comma-separated list of bintools to + regard as missing Returns: int return code, 0 on success @@ -375,6 +376,8 @@ class TestFunctional(unittest.TestCase): args.append('-M') if allow_fake_blobs: args.append('--fake-ext-blobs') + if force_missing_bintools: + args += ['--force-missing-bintools', force_missing_bintools] if update_fdt_in_elf: args += ['--update-fdt-in-elf', update_fdt_in_elf] if images: @@ -1715,6 +1718,18 @@ class TestFunctional(unittest.TestCase): self.assertIn("Node '/binman/gbb': GBB must have a fixed size", str(e.exception)) + def testGbbMissing(self): + """Test that binman still produces an image if futility is missing""" + entry_args = { + 'keydir': 'devkeys', + } + with test_util.capture_sys_output() as (_, stderr): + self._DoTestFile('071_gbb.dts', force_missing_bintools='futility', + entry_args=entry_args) + err = stderr.getvalue() + self.assertRegex(err, + "Image 'main-section'.*missing bintools.*: futility") + def _HandleVblockCommand(self, pipe_list): """Fake calls to the futility utility @@ -1800,6 +1815,19 @@ class TestFunctional(unittest.TestCase): expected_hashval = m.digest() self.assertEqual(expected_hashval, hashval) + def testVblockMissing(self): + """Test that binman still produces an image if futility is missing""" + entry_args = { + 'keydir': 'devkeys', + } + with test_util.capture_sys_output() as (_, stderr): +
self._DoTestFile('074_vblock.dts', + force_missing_bintools='futility', + entry_args=entry_args) + err = stderr.getvalue() + self.assertRegex(err, + "Image 'main-section'.*missing bintools.*: futility") + def testTpl(self): """Test that an image with TPL and its device tree can be created""" # ELF file with a '__bss_size' symbol @@ -1923,7 +1951,7 @@ class TestFunctional(unittest.TestCase): self._ResetDtbs() def _decompress(self, data): - return tools.Decompress(data, 'lz4') + return comp_util.decompress(data, 'lz4') def testCompress(self): """Test compression of blobs""" @@ -2311,8 +2339,8 @@ class TestFunctional(unittest.TestCase): # We expect to find the TPL will in subpart IBBP entry IBBL image_fname = tools.GetOutputFilename('image.bin') tpl_fname = tools.GetOutputFilename('tpl.out') - tools.RunIfwiTool(image_fname, tools.CMD_EXTRACT, fname=tpl_fname, - subpart='IBBP', entry_name='IBBL') + ifwitool = bintool.Bintool.create('ifwitool') + ifwitool.extract(image_fname, 'IBBP', 'IBBL', tpl_fname) tpl_data = tools.ReadFile(tpl_fname) self.assertEqual(U_BOOT_TPL_DATA, tpl_data[:len(U_BOOT_TPL_DATA)]) @@ -2337,6 +2365,16 @@ class TestFunctional(unittest.TestCase): self.assertIn('Could not complete processing of contents', str(e.exception)) + def testIfwiMissing(self): + """Test that binman still produces an image if ifwitool is missing""" + self._SetupIfwi('fitimage.bin') + with test_util.capture_sys_output() as (_, stderr): + self._DoTestFile('111_x86_rom_ifwi.dts', + force_missing_bintools='ifwitool') + err = stderr.getvalue() + self.assertRegex(err, + "Image 'main-section'.*missing bintools.*: ifwitool") + def testCbfsOffset(self): """Test a CBFS with files at particular offsets @@ -2802,7 +2840,7 @@ class TestFunctional(unittest.TestCase): def testExtractCbfsRaw(self): """Test extracting CBFS compressed data without decompressing it""" data = self._RunExtractCmd('section/cbfs/u-boot-dtb', decomp=False) -
dtb = comp_util.decompress(data, 'lzma', with_header=False) self.assertEqual(EXTRACT_DTB_SIZE, len(dtb)) def testExtractBadEntry(self): @@ -3616,6 +3654,15 @@ class TestFunctional(unittest.TestCase): # Just check that the data appears in the file somewhere self.assertIn(U_BOOT_SPL_DATA, data) + def testMkimageMissing(self): + """Test that binman still produces an image if mkimage is missing""" + with test_util.capture_sys_output() as (_, stderr): + self._DoTestFile('156_mkimage.dts', + force_missing_bintools='mkimage') + err = stderr.getvalue() + self.assertRegex(err, + "Image 'main-section'.*missing bintools.*: mkimage") + def testExtblob(self): """Test an image with an external blob""" data = self._DoReadFile('157_blob_ext.dts') @@ -3713,11 +3760,37 @@ data = self._DoReadFile('162_fit_external.dts') fit_data = data[len(U_BOOT_DATA):-2] # _testing is 2 bytes + # Size of the external-data region as set up by mkimage + external_data_size = len(U_BOOT_DATA) + 2 + expected_size = (len(U_BOOT_DATA) + 0x400 + + tools.Align(external_data_size, 4) + + len(U_BOOT_NODTB_DATA)) + # The data should be outside the FIT dtb = fdt.Fdt.FromData(fit_data) dtb.Scan() fnode = dtb.GetNode('/images/kernel') self.assertNotIn('data', fnode.props) + self.assertEqual(len(U_BOOT_DATA), + fdt_util.fdt32_to_cpu(fnode.props['data-size'].value)) + fit_pos = 0x400 + self.assertEqual( + fit_pos, + fdt_util.fdt32_to_cpu(fnode.props['data-position'].value)) + + self.assertEquals(expected_size, len(data)) + actual_pos = len(U_BOOT_DATA) + fit_pos + self.assertEqual(U_BOOT_DATA + b'aa', + data[actual_pos:actual_pos + external_data_size]) + + def testFitMissing(self): + """Test that binman still produces a FIT image if mkimage is missing""" + with test_util.capture_sys_output() as (_, stderr): + self._DoTestFile('162_fit_external.dts', + force_missing_bintools='mkimage') + err = stderr.getvalue() + self.assertRegex(err, + "Image 'main-section'.*missing
bintools.*: mkimage") def testSectionIgnoreHashSignature(self): """Test that sections ignore hash, signature nodes for its data""" @@ -4212,13 +4285,13 @@ class TestFunctional(unittest.TestCase): # Check compressed data section1 = self._decompress(rest) - expect1 = tools.Compress(COMPRESS_DATA + U_BOOT_DATA, 'lz4') + expect1 = comp_util.compress(COMPRESS_DATA + U_BOOT_DATA, 'lz4') self.assertEquals(expect1, rest[:len(expect1)]) self.assertEquals(COMPRESS_DATA + U_BOOT_DATA, section1) rest1 = rest[len(expect1):] section2 = self._decompress(rest1) - expect2 = tools.Compress(COMPRESS_DATA + COMPRESS_DATA, 'lz4') + expect2 = comp_util.compress(COMPRESS_DATA + COMPRESS_DATA, 'lz4') self.assertEquals(expect2, rest1[:len(expect2)]) self.assertEquals(COMPRESS_DATA + COMPRESS_DATA, section2) rest2 = rest1[len(expect2):] @@ -4965,6 +5038,68 @@ fdt fdtmap Extract the devicetree blob from the fdtmap err, "Image '.*' has faked external blobs and is non-functional: .*") + def testExtblobListFaked(self): + """Test an extblob with missing external blobs that are faked""" + with test_util.capture_sys_output() as (stdout, stderr): + self._DoTestFile('216_blob_ext_list_missing.dts', + allow_fake_blobs=True) + err = stderr.getvalue() + self.assertRegex(err, "Image 'main-section'.*faked.*: blob-ext-list") + + def testListBintools(self): + args = ['tool', '--list'] + with test_util.capture_sys_output() as (stdout, _): + self._DoBinman(*args) + out = stdout.getvalue().splitlines() + self.assertTrue(len(out) >= 2) + + def testFetchBintools(self): + def fail_download(url): + """Fake the tools.Download() function by raising an exception""" + raise urllib.error.URLError('my error') + + args = ['tool'] + with self.assertRaises(ValueError) as e: + self._DoBinman(*args) + self.assertIn("Invalid arguments to 'tool' subcommand", + str(e.exception)) + + args = ['tool', '--fetch'] + with self.assertRaises(ValueError) as e: + self._DoBinman(*args) + self.assertIn('Please specify bintools to fetch',
str(e.exception)) + + args = ['tool', '--fetch', '_testing'] + with unittest.mock.patch.object(tools, 'Download', + side_effect=fail_download): + with test_util.capture_sys_output() as (stdout, _): + self._DoBinman(*args) + self.assertIn('failed to fetch with all methods', stdout.getvalue()) + + def testInvalidCompress(self): + with self.assertRaises(ValueError) as e: + comp_util.compress(b'', 'invalid') + self.assertIn("Unknown algorithm 'invalid'", str(e.exception)) + + with self.assertRaises(ValueError) as e: + comp_util.decompress(b'1234', 'invalid') + self.assertIn("Unknown algorithm 'invalid'", str(e.exception)) + + def testBintoolDocs(self): + """Test for creation of bintool documentation""" + with test_util.capture_sys_output() as (stdout, stderr): + control.write_bintool_docs(control.bintool.Bintool.get_tool_list()) + self.assertTrue(len(stdout.getvalue()) > 0) + + def testBintoolDocsMissing(self): + """Test handling of missing bintool documentation""" + with self.assertRaises(ValueError) as e: + with test_util.capture_sys_output() as (stdout, stderr): + control.write_bintool_docs( + control.bintool.Bintool.get_tool_list(), 'mkimage') + self.assertIn('Documentation is missing for modules: mkimage', + str(e.exception)) + if __name__ == "__main__": unittest.main() diff --git a/tools/binman/image.py b/tools/binman/image.py index f0a7d65299e..0f0c1d29e80 100644 --- a/tools/binman/image.py +++ b/tools/binman/image.py @@ -82,6 +82,7 @@ class Image(section.Entry_section): self.missing_etype = missing_etype self.use_expanded = use_expanded self.test_section_timeout = False + self.bintools = {} if not test: self.ReadNode() @@ -394,3 +395,16 @@ class Image(section.Entry_section): self._CollectEntries(entries, entries_by_name, self) return self.LookupSymbol(sym_name, optional, msg, base_addr, entries_by_name) + + def CollectBintools(self): + """Collect all the bintools used by this image + + Returns: + Dict of bintools: + key: name of tool + value: Bintool object + 
""" + bintools = {} + super().AddBintools(bintools) + self.bintools = bintools + return bintools diff --git a/tools/binman/main.py b/tools/binman/main.py index 35944f314a2..03462e7bb8b 100755 --- a/tools/binman/main.py +++ b/tools/binman/main.py @@ -35,6 +35,7 @@ sys.pycache_prefix = os.path.relpath(our_path, srctree) # in PYTHONPATH) sys.path.insert(2, our1_path) +from binman import bintool from patman import test_util # Bring in the libfdt module @@ -68,6 +69,7 @@ def RunTests(debug, verbosity, processes, test_preserve_dirs, args, toolpath): name to execute (as in 'binman test testSections', for example) toolpath: List of paths to use for tools """ + from binman import bintool_test from binman import cbfs_util_test from binman import elf_test from binman import entry_test @@ -85,9 +87,9 @@ def RunTests(debug, verbosity, processes, test_preserve_dirs, args, toolpath): test_util.RunTestSuites( result, debug, verbosity, test_preserve_dirs, processes, test_name, toolpath, - [entry_test.TestEntry, ftest.TestFunctional, fdt_test.TestFdt, - elf_test.TestElf, image_test.TestImage, cbfs_util_test.TestCbfs, - fip_util_test.TestFip]) + [bintool_test.TestBintool, entry_test.TestEntry, ftest.TestFunctional, + fdt_test.TestFdt, elf_test.TestElf, image_test.TestImage, + cbfs_util_test.TestCbfs, fip_util_test.TestFip]) return test_util.ReportResult('binman', test_name, result) @@ -128,6 +130,9 @@ def RunBinman(args): args.test_preserve_dirs, args.tests, args.toolpath) + elif args.cmd == 'bintool-docs': + control.write_bintool_docs(bintool.Bintool.get_tool_list()) + elif args.cmd == 'entry-docs': control.WriteEntryDocs(control.GetEntryModules()) diff --git a/tools/binman/test/162_fit_external.dts b/tools/binman/test/162_fit_external.dts index 19518e05a58..6f2a629a985 100644 --- a/tools/binman/test/162_fit_external.dts +++ b/tools/binman/test/162_fit_external.dts @@ -10,7 +10,7 @@ u-boot { }; fit { - fit,external-offset = <0>; + fit,external-offset = <0x400>; description = 
"test-desc"; #address-cells = <1>; diff --git a/tools/binman/test/218_blob_ext_list_fake.dts b/tools/binman/test/218_blob_ext_list_fake.dts new file mode 100644 index 00000000000..54ee54fdaab --- /dev/null +++ b/tools/binman/test/218_blob_ext_list_fake.dts @@ -0,0 +1,14 @@ +// SPDX-License-Identifier: GPL-2.0+ + +/dts-v1/; + +/ { + #address-cells = <1>; + #size-cells = <1>; + + binman { + blob-ext-list { + filenames = "refcode.bin", "fake-file"; + }; + }; +}; diff --git a/tools/buildman/toolchain.py b/tools/buildman/toolchain.py index 4e2471f3e37..adc75a7a0b7 100644 --- a/tools/buildman/toolchain.py +++ b/tools/buildman/toolchain.py @@ -515,50 +515,6 @@ class Toolchains: return arch, links return None - def Download(self, url): - """Download a file to a temporary directory - - Args: - url: URL to download - Returns: - Tuple: - Temporary directory name - Full path to the downloaded archive file in that directory, - or None if there was an error while downloading - """ - print('Downloading: %s' % url) - leaf = url.split('/')[-1] - tmpdir = tempfile.mkdtemp('.buildman') - response = urllib.request.urlopen(url) - fname = os.path.join(tmpdir, leaf) - fd = open(fname, 'wb') - meta = response.info() - size = int(meta.get('Content-Length')) - done = 0 - block_size = 1 << 16 - status = '' - - # Read the file in chunks and show progress as we go - while True: - buffer = response.read(block_size) - if not buffer: - print(chr(8) * (len(status) + 1), '\r', end=' ') - break - - done += len(buffer) - fd.write(buffer) - status = r'%10d MiB [%3d%%]' % (done // 1024 // 1024, - done * 100 // size) - status = status + chr(8) * (len(status) + 1) - print(status, end=' ') - sys.stdout.flush() - fd.close() - if done != size: - print('Error, failed to download') - os.remove(fname) - fname = None - return tmpdir, fname - def Unpack(self, fname, dest): """Unpack a tar file @@ -615,7 +571,7 @@ class Toolchains: os.mkdir(dest) # Download the tar file for this toolchain and unpack it - tmpdir, 
tarfile = self.Download(url) + tarfile, tmpdir = tools.Download(url, '.buildman') if not tarfile: return 1 print(col.Color(col.GREEN, 'Unpacking to: %s' % dest), end=' ') diff --git a/tools/fdt_host.h b/tools/fdt_host.h index 15c07c7a96e..bc42306c9e5 100644 --- a/tools/fdt_host.h +++ b/tools/fdt_host.h @@ -27,6 +27,14 @@ */ int fdt_remove_unused_strings(const void *old, void *new); +/** + * fit_check_sign() - Check a signature in a FIT + * + * @fit: FIT to check + * @key: Key FDT blob to check against + * @fit_uname_config: Name of configuration to check (NULL for default) + * @return 0 if OK, -ve if signature failed + */ int fit_check_sign(const void *fit, const void *key, const char *fit_uname_config); diff --git a/tools/fit_check_sign.c b/tools/fit_check_sign.c index 5573842d251..3d1d33fdab1 100644 --- a/tools/fit_check_sign.c +++ b/tools/fit_check_sign.c @@ -27,7 +27,7 @@ void usage(char *cmdname) { fprintf(stderr, "Usage: %s -f fit file -k key file -c config name\n" " -f ==> set fit file which should be checked'\n" - " -k ==> set key file which contains the key'\n" + " -k ==> set key .dtb file which contains the key'\n" " -c ==> set the configuration name'\n", cmdname); exit(EXIT_FAILURE); @@ -89,7 +89,7 @@ int main(int argc, char **argv) fprintf(stderr, "Signature check OK\n"); } else { ret = EXIT_FAILURE; - fprintf(stderr, "Signature check Bad (error %d)\n", ret); + fprintf(stderr, "Signature check bad (error %d)\n", ret); } (void) munmap((void *)fit_blob, fsbuf.st_size); diff --git a/tools/fit_common.c b/tools/fit_common.c index 5c8920de547..01649760ac0 100644 --- a/tools/fit_common.c +++ b/tools/fit_common.c @@ -119,3 +119,72 @@ err: return -1; } + +int copyfile(const char *src, const char *dst) +{ + int fd_src = -1, fd_dst = -1; + void *buf = NULL; + ssize_t size; + size_t count; + int ret = -1; + + fd_src = open(src, O_RDONLY); + if (fd_src < 0) { + printf("Can't open file %s (%s)\n", src, strerror(errno)); + goto out; + } + + fd_dst = open(dst, O_WRONLY 
| O_CREAT | O_TRUNC, 0666); + if (fd_dst < 0) { + printf("Can't open file %s (%s)\n", dst, strerror(errno)); + goto out; + } + + buf = calloc(1, 512); + if (!buf) { + printf("Can't allocate buffer to copy file\n"); + goto out; + } + + while (1) { + size = read(fd_src, buf, 512); + if (size < 0) { + printf("Can't read file %s\n", src); + goto out; + } + if (!size) + break; + + count = size; + size = write(fd_dst, buf, count); + if (size < 0) { + printf("Can't write file %s\n", dst); + goto out; + } + } + + ret = 0; + + out: + if (fd_src >= 0) + close(fd_src); + if (fd_dst >= 0) + close(fd_dst); + if (buf) + free(buf); + + return ret; +} + +void summary_show(struct image_summary *summary, const char *imagefile, + const char *keydest) +{ + if (summary->sig_offset) { + printf("Signature written to '%s', node '%s'\n", imagefile, + summary->sig_path); + if (keydest) { + printf("Public key written to '%s', node '%s'\n", + keydest, summary->keydest_path); + } + } +} diff --git a/tools/fit_common.h b/tools/fit_common.h index 0e8ee79115f..920a16acfdb 100644 --- a/tools/fit_common.h +++ b/tools/fit_common.h @@ -39,4 +39,27 @@ int mmap_fdt(const char *cmdname, const char *fname, size_t size_inc, void **blobp, struct stat *sbuf, bool delete_on_error, bool read_only); +/** + * copyfile() - Copy a file + * + * This uses read()/write() to copy file @src to file @dst + * + * If @dst exists, it is overwritten and truncated to the correct size. 
+ * + * @src: Filename to read from + * @dst: Filename to write to + * @return 0 if OK, -1 on error + */ +int copyfile(const char *src, const char *dst); + +/** + * summary_show() - Show summary information about the signing process + * + * @summary: Summary info to show + * @imagefile: Filename of the output image + * @keydest: Filename where the key information is written (NULL if none) + */ +void summary_show(struct image_summary *summary, const char *imagefile, + const char *keydest); + #endif /* _FIT_COMMON_H_ */ diff --git a/tools/fit_image.c b/tools/fit_image.c index 8df95c40d20..15f7c82d619 100644 --- a/tools/fit_image.c +++ b/tools/fit_image.c @@ -74,7 +74,8 @@ static int fit_add_file_data(struct image_tool_params *params, size_t size_inc, params->require_keys, params->engine_id, params->cmdname, - params->algo_name); + params->algo_name, + &params->summary); } if (dest_blob) { @@ -525,8 +526,9 @@ static int fit_extract_data(struct image_tool_params *params, const char *fname) /* Check if an offset for the external data was set.
*/ if (params->external_offset > 0) { if (params->external_offset < new_size) { - debug("External offset %x overlaps FIT length %x\n", - params->external_offset, new_size); + fprintf(stderr, + "External offset %x overlaps FIT length %x\n", + params->external_offset, new_size); ret = -EINVAL; goto err; } @@ -655,62 +657,6 @@ err: return ret; } -static int copyfile(const char *src, const char *dst) -{ - int fd_src = -1, fd_dst = -1; - void *buf = NULL; - ssize_t size; - size_t count; - int ret = -1; - - fd_src = open(src, O_RDONLY); - if (fd_src < 0) { - printf("Can't open file %s (%s)\n", src, strerror(errno)); - goto out; - } - - fd_dst = open(dst, O_WRONLY | O_CREAT, 0666); - if (fd_dst < 0) { - printf("Can't open file %s (%s)\n", dst, strerror(errno)); - goto out; - } - - buf = calloc(1, 512); - if (!buf) { - printf("Can't allocate buffer to copy file\n"); - goto out; - } - - while (1) { - size = read(fd_src, buf, 512); - if (size < 0) { - printf("Can't read file %s\n", src); - goto out; - } - if (!size) - break; - - count = size; - size = write(fd_dst, buf, count); - if (size < 0) { - printf("Can't write file %s\n", dst); - goto out; - } - } - - ret = 0; - - out: - if (fd_src >= 0) - close(fd_src); - if (fd_dst >= 0) - close(fd_dst); - if (buf) - free(buf); - - return ret; -} - /** * fit_handle_file - main FIT file processing function * diff --git a/tools/image-host.c b/tools/image-host.c index f86e1fbb1b9..eaeb76545c6 100644 --- a/tools/image-host.c +++ b/tools/image-host.c @@ -48,10 +48,10 @@ static int fit_set_hash_value(void *fit, int noffset, uint8_t *value, * fit_image_process_hash - Process a single subnode of the images/ node * * Check each subnode and process accordingly. For hash nodes we generate - * a hash of the supplised data and store it in the node. + * a hash of the supplied data and store it in the node. 
* * @fit: pointer to the FIT format image header - * @image_name: name of image being processes (used to display errors) + * @image_name: name of image being processed (used to display errors) * @noffset: subnode offset * @data: data to process * @size: size of data in bytes @@ -200,19 +200,20 @@ static int fit_image_setup_sig(struct image_sign_info *info, * fit_image_process_sig- Process a single subnode of the images/ node * * Check each subnode and process accordingly. For signature nodes we - * generate a signed hash of the supplised data and store it in the node. + * generate a signed hash of the supplied data and store it in the node. * * @keydir: Directory containing keys to use for signing - * @keydest: Destination FDT blob to write public keys into + * @keydest: Destination FDT blob to write public keys into (NULL if none) * @fit: pointer to the FIT format image header - * @image_name: name of image being processes (used to display errors) + * @image_name: name of image being processed (used to display errors) * @noffset: subnode offset * @data: data to process * @size: size of data in bytes * @comment: Comment to add to signature nodes * @require_keys: Mark all keys as 'required' * @engine_id: Engine to use for signing - * Return: 0 if ok, -1 on error + * Return: keydest node if @keydest is non-NULL, else 0 if none; -ve error code + * on failure */ static int fit_image_process_sig(const char *keydir, const char *keyfile, void *keydest, void *fit, const char *image_name, @@ -267,11 +268,13 @@ static int fit_image_process_sig(const char *keydir, const char *keyfile, */ if (keydest) { ret = info.crypto->add_verify_data(&info, keydest); - if (ret) { + if (ret < 0) { printf("Failed to add verification data for '%s' signature node in '%s' image node\n", node_name, image_name); return ret; } + /* Return the node that was written to */ + return ret; } return 0; @@ -649,7 +652,7 @@ int fit_image_add_verification_data(const char *keydir, const char *keyfile, 
comment, require_keys, engine_id, cmdname, algo_name); } - if (ret) + if (ret < 0) return ret; } @@ -689,14 +692,14 @@ static int strlist_add(struct strlist *list, const char *str) return 0; } -static const char *fit_config_get_image_list(void *fit, int noffset, - int *lenp, int *allow_missingp) +static const char *fit_config_get_image_list(const void *fit, int noffset, + int *lenp, int *allow_missingp) { static const char default_list[] = FIT_KERNEL_PROP "\0" FIT_FDT_PROP; const char *prop; - /* If there is an "image" property, use that */ + /* If there is a "sign-images" property, use that */ prop = fdt_getprop(fit, noffset, "sign-images", lenp); if (prop) { *allow_missingp = 0; @@ -710,10 +713,26 @@ static const char *fit_config_get_image_list(void *fit, int noffset, return default_list; } -static int fit_config_add_hash(void *fit, const char *conf_name, const char *sig_name, - struct strlist *node_inc, const char *iname, int image_noffset) +/** + * fit_config_add_hash() - Add a list of nodes to hash for an image + * + * This adds a list of paths to image nodes (as referred to by a particular + * offset) that need to be hashed, to protect a configuration + * + * @fit: Pointer to the FIT format image header + * @image_noffset: Offset of image to process (e.g. /images/kernel-1) + * @node_inc: List of nodes to add to + * @conf_name: Configuration-node name, child of /configurations node (only + * used for error messages) + * @sig_name: Signature-node name (only used for error messages) + * @iname: Name of image being processed (e.g.
"kernel-1"; only used + * for error messages) + */ +static int fit_config_add_hash(const void *fit, int image_noffset, + struct strlist *node_inc, const char *conf_name, + const char *sig_name, const char *iname) { - char name[200], path[200]; + char path[200]; int noffset; int hash_count; int ret; @@ -724,9 +743,6 @@ if (strlist_add(node_inc, path)) goto err_mem; - snprintf(name, sizeof(name), "%s/%s", FIT_CONFS_PATH, - conf_name); - /* Add all this image's hashes */ hash_count = 0; for (noffset = fdt_first_subnode(fit, image_noffset); @@ -781,7 +797,21 @@ err_path: return -ENOENT; } -static int fit_config_get_hash_list(void *fit, int conf_noffset, +/** + * fit_config_get_hash_list() - Get the regions to sign + * + * This calculates a list of nodes to hash for this particular configuration, + * returning it as a string list (struct strlist, not a devicetree string list) + * + * @fit: Pointer to the FIT format image header + * @conf_noffset: Offset of configuration node to sign (child of + * /configurations node) + * @sig_offset: Offset of signature node containing info about how to sign it + * (child of 'signatures' node) + * @return 0 if OK, -ENOENT if an image referred to by the configuration cannot + * be found, -ENOMSG if there were no images in the configuration + */ +static int fit_config_get_hash_list(const void *fit, int conf_noffset, int sig_offset, struct strlist *node_inc) { int allow_missing; @@ -832,9 +862,8 @@ static int fit_config_get_hash_list(void *fit, int conf_noffset, return -ENOENT; } - ret = fit_config_add_hash(fit, conf_name, - sig_name, node_inc, - iname, image_noffset); + ret = fit_config_add_hash(fit, image_noffset, node_inc, + conf_name, sig_name, iname); if (ret < 0) return ret; @@ -856,9 +885,32 @@ err_mem: return -ENOMEM; } -static int fit_config_get_data(void *fit, int conf_noffset, int noffset, - struct image_region **regionp, int *region_countp, - char
**region_propp, int *region_proplen) +/** + * fit_config_get_regions() - Get the regions to sign + * + * This calculates a list of nodes to hash for this particular configuration, + * then finds which regions of the devicetree they correspond to. + * + * @fit: Pointer to the FIT format image header + * @conf_noffset: Offset of configuration node to sign (child of + * /configurations node) + * @sig_offset: Offset of signature node containing info about how to sign it + * (child of 'signatures' node) + * @regionp: Returns list of regions that need to be hashed (allocated; must be + * freed by the caller) + * @region_countp: Returns number of regions + * @region_propp: Returns string-list property containing the list of nodes + * that correspond to the regions. Each entry is a full path to the node. + * This is in devicetree format, i.e. a \0 between each string. This is + * allocated and must be freed by the caller. + * @region_proplen: Returns length of *@region_propp in bytes + * @return 0 if OK, -ENOMEM if out of memory, -EIO if the regions to hash could + * not be found, -EINVAL if no regions were found to hash + */ +static int fit_config_get_regions(const void *fit, int conf_noffset, + int sig_offset, struct image_region **regionp, + int *region_countp, char **region_propp, + int *region_proplen) { char * const exc_prop[] = {"data"}; struct strlist node_inc; @@ -871,11 +923,12 @@ static int fit_config_get_data(void *fit, int conf_noffset, int noffset, int ret, len; conf_name = fit_get_name(fit, conf_noffset, NULL); - sig_name = fit_get_name(fit, noffset, NULL); + sig_name = fit_get_name(fit, sig_offset, NULL); debug("%s: conf='%s', sig='%s'\n", __func__, conf_name, sig_name); /* Get a list of nodes we want to hash */ - ret = fit_config_get_hash_list(fit, conf_noffset, noffset, &node_inc); + ret = fit_config_get_hash_list(fit, conf_noffset, sig_offset, + &node_inc); if (ret) return ret; @@ -928,8 +981,26 @@ static int fit_config_get_data(void *fit, int
conf_noffset, int noffset, return 0; } +/** + * fit_config_process_sig - Process a single subnode of the configurations/ node + * + * Generate a signed hash of the supplied data and store it in the node. + * + * @keydir: Directory containing keys to use for signing + * @keydest: Destination FDT blob to write public keys into (NULL if none) + * @fit: pointer to the FIT format image header + * @conf_name: name of config being processed (used to display errors) + * @conf_noffset: Offset of configuration node, e.g. '/configurations/conf-1' + * @noffset: subnode offset, e.g. '/configurations/conf-1/sig-1' + * @comment: Comment to add to signature nodes + * @require_keys: Mark all keys as 'required' + * @engine_id: Engine to use for signing + * @cmdname: Command name used when reporting errors + * @return keydest node if @keydest is non-NULL, else 0 if none; -ve error code + * on failure + */ static int fit_config_process_sig(const char *keydir, const char *keyfile, - void *keydest, void *fit, const char *conf_name, + void *keydest, void *fit, const char *conf_name, int conf_noffset, int noffset, const char *comment, int require_keys, const char *engine_id, const char *cmdname, const char *algo_name) @@ -945,8 +1016,9 @@ static int fit_config_process_sig(const char *keydir, const char *keyfile, int ret; node_name = fit_get_name(fit, noffset, NULL); - if (fit_config_get_data(fit, conf_noffset, noffset, &region, - &region_count, &region_prop, &region_proplen)) + if (fit_config_get_regions(fit, conf_noffset, noffset, &region, + &region_count, &region_prop, + &region_proplen)) return -1; if (fit_image_setup_sig(&info, keydir, keyfile, fit, conf_name, noffset, @@ -986,7 +1058,7 @@ static int fit_config_process_sig(const char *keydir, const char *keyfile, /* Write the public key into the supplied FDT file */ if (keydest) { ret = info.crypto->add_verify_data(&info, keydest); - if (ret) { + if (ret < 0) { printf("Failed to add verification data for '%s' signature node in '%s' configuration node\n",
node_name, conf_name); } @@ -999,7 +1071,8 @@ static int fit_config_process_sig(const char *keydir, const char *keyfile, static int fit_config_add_verification_data(const char *keydir, const char *keyfile, void *keydest, void *fit, int conf_noffset, const char *comment, int require_keys, const char *engine_id, - const char *cmdname, const char *algo_name) + const char *cmdname, const char *algo_name, + struct image_summary *summary) { const char *conf_name; int noffset; @@ -1019,9 +1092,20 @@ static int fit_config_add_verification_data(const char *keydir, ret = fit_config_process_sig(keydir, keyfile, keydest, fit, conf_name, conf_noffset, noffset, comment, require_keys, engine_id, cmdname, algo_name); + if (ret < 0) + return ret; + + summary->sig_offset = noffset; + fdt_get_path(fit, noffset, summary->sig_path, + sizeof(summary->sig_path)); + + if (keydest) { + summary->keydest_offset = ret; + fdt_get_path(keydest, ret, + summary->keydest_path, + sizeof(summary->keydest_path)); + } } - if (ret) - return ret; } return 0; @@ -1065,7 +1149,8 @@ int fit_cipher_data(const char *keydir, void *keydest, void *fit, int fit_add_verification_data(const char *keydir, const char *keyfile, void *keydest, void *fit, const char *comment, int require_keys, const char *engine_id, - const char *cmdname, const char *algo_name) + const char *cmdname, const char *algo_name, + struct image_summary *summary) { int images_noffset, confs_noffset; int noffset; @@ -1114,7 +1199,7 @@ int fit_add_verification_data(const char *keydir, const char *keyfile, fit, noffset, comment, require_keys, engine_id, cmdname, - algo_name); + algo_name, summary); if (ret) return ret; } diff --git a/tools/imagetool.h b/tools/imagetool.h index b7ac3a23d0f..413e97cbeb2 100644 --- a/tools/imagetool.h +++ b/tools/imagetool.h @@ -21,6 +21,8 @@ #include <unistd.h> #include <u-boot/sha1.h> +#include <image.h> + #include "fdt_host.h" #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0])) @@ -84,6 +86,7 @@ struct 
image_tool_params { int bl_len; /* Block length in bytes for external data */ const char *engine_id; /* Engine to use for signing */ bool reset_timestamp; /* Reset the timestamp on an existing image */ + struct image_summary summary; /* results of signing process */ }; /* diff --git a/tools/mkimage.c b/tools/mkimage.c index 0ec28da33cb..c8f4ecd473d 100644 --- a/tools/mkimage.c +++ b/tools/mkimage.c @@ -10,6 +10,7 @@ #include "imagetool.h" #include "mkimage.h" #include "imximage.h" +#include <fit_common.h> #include <image.h> #include <version.h> #ifdef __linux__ @@ -472,6 +473,9 @@ int main(int argc, char **argv) (void) munmap((void *)ptr, sbuf.st_size); (void) close (ifd); + if (!retval) + summary_show(&params.summary, params.imagefile, + params.keydest); exit (retval); } diff --git a/tools/patman/tools.py b/tools/patman/tools.py index 86c4f616206..5dfecaf917b 100644 --- a/tools/patman/tools.py +++ b/tools/patman/tools.py @@ -7,9 +7,9 @@ import glob import os import shlex import shutil -import struct import sys import tempfile +import urllib.request from patman import command from patman import tout @@ -313,7 +328,22 @@ def GetTargetCompileTool(name, cross_compile=None): target_name = name return target_name, extra_args -def Run(name, *args, **kwargs): +def get_env_with_path(): + """Get an updated environment with the PATH variable set correctly + + If there are any search paths set, these need to come first in the PATH so + that these override any other version of the tools.
+ + Returns: + dict: New environment with PATH updated, or None if there are no search + paths + """ + if tool_search_paths: + env = dict(os.environ) + env['PATH'] = ':'.join(tool_search_paths) + ':' + env['PATH'] + return env + +def run_result(name, *args, **kwargs): """Run a tool with some arguments This runs a 'tool', which is a program used by binman to process files and @@ -326,6 +341,7 @@ def Run(name, *args, **kwargs): for_host: True to resolve the command to the version for the host for_target: False to run the command as-is, without resolving it to the version for the compile target + raise_on_error: Raise an error if the command fails (True by default) Returns: CommandResult object @@ -334,10 +350,8 @@ def Run(name, *args, **kwargs): binary = kwargs.get('binary') for_host = kwargs.get('for_host', False) for_target = kwargs.get('for_target', not for_host) - env = None - if tool_search_paths: - env = dict(os.environ) - env['PATH'] = ':'.join(tool_search_paths) + ':' + env['PATH'] + raise_on_error = kwargs.get('raise_on_error', True) + env = get_env_with_path() if for_target: name, extra_args = GetTargetCompileTool(name) args = tuple(extra_args) + args @@ -349,11 +363,12 @@ def Run(name, *args, **kwargs): result = command.RunPipe([all_args], capture=True, capture_stderr=True, env=env, raise_on_error=False, binary=binary) if result.return_code: - raise ValueError("Error %d running '%s': %s" % - (result.return_code,' '.join(all_args), - result.stderr)) - return result.stdout - except: + if raise_on_error: + raise ValueError("Error %d running '%s': %s" % + (result.return_code,' '.join(all_args), + result.stderr or result.stdout)) + return result + except ValueError: if env and not PathHasFile(env['PATH'], name): msg = "Please install tool '%s'" % name package = packages.get(name) @@ -362,6 +377,50 @@ def Run(name, *args, **kwargs): raise ValueError(msg) raise +def tool_find(name): + """Search the current path for a tool + + This uses both PATH and any value 
from SetToolPaths() to search for a tool + + Args: + name (str): Name of tool to locate + + Returns: + str: Full path to tool if found, else None + """ + name = os.path.expanduser(name) # Expand paths containing ~ + paths = [] + pathvar = os.environ.get('PATH') + if pathvar: + paths = pathvar.split(':') + if tool_search_paths: + paths += tool_search_paths + for path in paths: + fname = os.path.join(path, name) + if os.path.isfile(fname) and os.access(fname, os.X_OK): + return fname + +def Run(name, *args, **kwargs): + """Run a tool with some arguments + + This runs a 'tool', which is a program used by binman to process files and + perhaps produce some output. Tools can be located on the PATH or in a + search path. + + Args: + name: Command name to run + args: Arguments to the tool + for_host: True to resolve the command to the version for the host + for_target: False to run the command as-is, without resolving it + to the version for the compile target + + Returns: + CommandResult object + """ + result = run_result(name, *args, **kwargs) + if result is not None: + return result.stdout + def Filename(fname): """Resolve a file path to an absolute path. @@ -458,115 +517,6 @@ def ToString(bval): """ return bval.decode('utf-8') -def Compress(indata, algo, with_header=True): - """Compress some data using a given algorithm - - Note that for lzma this uses an old version of the algorithm, not that - provided by xz. - - This requires 'lz4' and 'lzma_alone' tools. It also requires an output - directory to be previously set up, by calling PrepareOutputDir(). - - Care is taken to use unique temporary files so that this function can be - called from multiple threads. 
- - Args: - indata: Input data to compress - algo: Algorithm to use ('none', 'gzip', 'lz4' or 'lzma') - - Returns: - Compressed data - """ - if algo == 'none': - return indata - fname = tempfile.NamedTemporaryFile(prefix='%s.comp.tmp' % algo, - dir=outdir).name - WriteFile(fname, indata) - if algo == 'lz4': - data = Run('lz4', '--no-frame-crc', '-B4', '-5', '-c', fname, - binary=True) - # cbfstool uses a very old version of lzma - elif algo == 'lzma': - outfname = tempfile.NamedTemporaryFile(prefix='%s.comp.otmp' % algo, - dir=outdir).name - Run('lzma_alone', 'e', fname, outfname, '-lc1', '-lp0', '-pb0', '-d8') - data = ReadFile(outfname) - elif algo == 'gzip': - data = Run('gzip', '-c', fname, binary=True) - else: - raise ValueError("Unknown algorithm '%s'" % algo) - if with_header: - hdr = struct.pack('<I', len(data)) - data = hdr + data - return data - -def Decompress(indata, algo, with_header=True): - """Decompress some data using a given algorithm - - Note that for lzma this uses an old version of the algorithm, not that - provided by xz. - - This requires 'lz4' and 'lzma_alone' tools. It also requires an output - directory to be previously set up, by calling PrepareOutputDir(). 
- - Args: - indata: Input data to decompress - algo: Algorithm to use ('none', 'gzip', 'lz4' or 'lzma') - - Returns: - Compressed data - """ - if algo == 'none': - return indata - if with_header: - data_len = struct.unpack('<I', indata[:4])[0] - indata = indata[4:4 + data_len] - fname = GetOutputFilename('%s.decomp.tmp' % algo) - with open(fname, 'wb') as fd: - fd.write(indata) - if algo == 'lz4': - data = Run('lz4', '-dc', fname, binary=True) - elif algo == 'lzma': - outfname = GetOutputFilename('%s.decomp.otmp' % algo) - Run('lzma_alone', 'd', fname, outfname) - data = ReadFile(outfname, binary=True) - elif algo == 'gzip': - data = Run('gzip', '-cd', fname, binary=True) - else: - raise ValueError("Unknown algorithm '%s'" % algo) - return data - -CMD_CREATE, CMD_DELETE, CMD_ADD, CMD_REPLACE, CMD_EXTRACT = range(5) - -IFWITOOL_CMDS = { - CMD_CREATE: 'create', - CMD_DELETE: 'delete', - CMD_ADD: 'add', - CMD_REPLACE: 'replace', - CMD_EXTRACT: 'extract', - } - -def RunIfwiTool(ifwi_file, cmd, fname=None, subpart=None, entry_name=None): - """Run ifwitool with the given arguments: - - Args: - ifwi_file: IFWI file to operation on - cmd: Command to execute (CMD_...) 
- fname: Filename of file to add/replace/extract/create (None for - CMD_DELETE) - subpart: Name of sub-partition to operation on (None for CMD_CREATE) - entry_name: Name of directory entry to operate on, or None if none - """ - args = ['ifwitool', ifwi_file] - args.append(IFWITOOL_CMDS[cmd]) - if fname: - args += ['-f', fname] - if subpart: - args += ['-n', subpart] - if entry_name: - args += ['-d', '-e', entry_name] - Run(*args) - def ToHex(val): """Convert an integer value (or None) to a string @@ -596,3 +546,51 @@ def PrintFullHelp(fname): if not pager: pager = ['more'] command.Run(*pager, fname) + +def Download(url, tmpdir_pattern='.patman'): + """Download a file to a temporary directory + + Args: + url (str): URL to download + tmpdir_pattern (str): pattern to use for the temporary directory + + Returns: + Tuple: + Full path to the downloaded archive file in that directory, + or None if there was an error while downloading + Temporary directory name + """ + print('- downloading: %s' % url) + leaf = url.split('/')[-1] + tmpdir = tempfile.mkdtemp(tmpdir_pattern) + response = urllib.request.urlopen(url) + fname = os.path.join(tmpdir, leaf) + fd = open(fname, 'wb') + meta = response.info() + size = int(meta.get('Content-Length')) + done = 0 + block_size = 1 << 16 + status = '' + + # Read the file in chunks and show progress as we go + while True: + buffer = response.read(block_size) + if not buffer: + print(chr(8) * (len(status) + 1), '\r', end=' ') + break + + done += len(buffer) + fd.write(buffer) + status = r'%10d MiB [%3d%%]' % (done // 1024 // 1024, + done * 100 // size) + status = status + chr(8) * (len(status) + 1) + print(status, end=' ') + sys.stdout.flush() + print('\r', end='') + sys.stdout.flush() + fd.close() + if done != size: + print('Error, failed to download') + os.remove(fname) + fname = None + return fname, tmpdir
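The lookup behaviour introduced by `tool_find()` above can be sketched as a self-contained snippet. This is a hypothetical minimal reimplementation for illustration only: `find_tool` and its `search_paths` argument stand in for the module's `tool_find()` and the `tool_search_paths` global, and are not part of the patch.

```python
import os

def find_tool(name, search_paths=()):
    """Locate an executable like tool_find(): PATH entries first, then extras."""
    name = os.path.expanduser(name)  # expand a leading ~, as in the patch
    paths = (os.environ.get('PATH') or '').split(':')
    paths += list(search_paths)  # stand-in for the tool_search_paths global
    for path in paths:
        fname = os.path.join(path, name)
        # accept only a regular file that we are allowed to execute
        if os.path.isfile(fname) and os.access(fname, os.X_OK):
            return fname
    return None  # explicit; the original simply falls off the end
```

One asymmetry is visible in the hunks above: `get_env_with_path()` prepends the search paths so they override anything on PATH, while `tool_find()` appends them after the PATH entries, so during lookup a system copy of a tool wins over one in a search path.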
