Looking forward to GCC6 – Many new warnings

Like every new GCC release, GCC6 introduces a lot of new useful warnings. My favorite is still -Wmisleading-indentation. But there are many more that have found various bugs. Not all of them are enabled by default, but it makes sense to enable as many as possible when writing new code.

Duplicate logic

In GCC6 -Wlogical-op (must be enabled explicitly) now also warns when the operands of a logical operator are the same. For example the following typo is detected:

points.c: In function 'distance':
points.c:10:19: warning: logical 'or' of equal expressions [-Wlogical-op]
   if (point.x < 0 || point.x < 0)
                   ^~
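
For context, here is a minimal sketch of the kind of code that triggers it (a reconstruction, not the original points.c; the struct layout and the dummy return values are my assumptions):

/* points.c - compile with: gcc -Wlogical-op -c points.c */
struct point { int x; int y; };

int
distance (struct point point)
{
  /* Typo: the second operand was presumably meant to be point.y.  */
  if (point.x < 0 || point.x < 0)
    return -1;
  return point.x + point.y; /* dummy placeholder computation */
}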

Similar logic for detecting duplicate conditions in an if-else-if chain has been added with -Wduplicated-cond. It must be enabled explicitly, which I would highly recommend because it found some real bugs like:

elflint.c: In function 'compare_hash_gnu_hash':
elflint.c:2483:34: error: duplicated 'if' condition [-Werror=duplicated-cond]
  else if (hash_shdr->sh_entsize == sizeof (Elf64_Word))
           ~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~

elflint.c:2448:29: note: previously used here
  if (hash_shdr->sh_entsize == sizeof (Elf32_Word))
      ~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~

GCC is correct: a Word in both an Elf32 and an Elf64 file is 4 bytes. We meant to check for sizeof (Elf64_Xword), which is 8 bytes.

And with -Wtautological-compare (enabled by -Wall) GCC6 will also detect comparisons of a variable against itself, which are always true or always false, as in this typo:

result.c: In function 'check_fast':
result.c:14:14: warning: self-comparison always evaluates to false [-Wtautological-compare]
  while (res > res)
             ^

Finally -Wswitch-bool (enabled by default) has been improved to only warn about switch statements on a boolean type if any of the case statements is outside the range of the boolean type.
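
Since no example is shown above for this one, here is a hedged sketch of the case it still flags (the file and function names are mine):

/* switch.c - compile with: gcc -Wall -c switch.c */
#include <stdbool.h>

void
handle (bool flag)
{
  switch (flag)
    {
    case 0:   /* within the boolean range, fine */
      break;
    case 4:   /* outside the boolean range: -Wswitch-bool warns */
      break;
    }
}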

Bit shifting

GCC5 already had warnings for -Wshift-count-negative and -Wshift-count-overflow. Both are enabled by default.

value.c: In function 'calculate':
value.c:7:9: warning: left shift count is negative [-Wshift-count-negative]
  b = a << -3;
        ^~
value.c:8:9: warning: right shift count >= width of type [-Wshift-count-overflow]
  b = a >> 63;
        ^~

GCC6 adds -Wshift-negative-value (enabled by -Wextra) which warns about left shifting a negative value. Such shifts are undefined because they depend on the representation of negative values.

value.c:9:10: warning: left shift of negative value [-Wshift-negative-value]
  b = -1 << 5;
         ^~

Also added in GCC6 is -Wshift-overflow (enabled by default) to detect left shift overflow.

value.c:10:11: warning: result of '10 << 30' requires 35 bits to represent, but 'int' only has 32 bits [-Wshift-overflow=]
 b = 0xa << (14 + 16);
         ^~

You can increase the warnings given with -Wshift-overflow=2 (not enabled by default), which makes GCC also warn when it can detect that shifting a signed value would change the sign bit.

value.c:11:10: warning: result of '1 << 31' requires 33 bits to represent, but 'int' only has 32 bits [-Wshift-overflow=]
  b |= 1 << 31;
         ^~
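
Taken together, a small file like the following (my reconstruction of value.c, not the author's original) triggers all four shift diagnostics:

/* value.c - compile with: gcc -Wall -Wextra -Wshift-overflow=2 -c value.c
   (assumes a 32-bit int) */
int
calculate (int a)
{
  int b;
  b = a << -3;          /* -Wshift-count-negative */
  b = a >> 63;          /* -Wshift-count-overflow */
  b = -1 << 5;          /* -Wshift-negative-value (with -Wextra) */
  b = 0xa << (14 + 16); /* -Wshift-overflow: needs 35 bits */
  b |= 1 << 31;         /* -Wshift-overflow=2: changes the sign bit */
  return b;
}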

NULL

The new -Wnull-dereference (must be enabled explicitly) warns when GCC detects you dereference (or might dereference) a null pointer, causing erroneous or undefined behavior (higher optimization levels might catch more cases). In the example below the && means s->bar is evaluated exactly when s really is NULL:

dereference.c: In function 'test2':
dereference.c:30:21: error: null pointer dereference [-Werror=null-dereference]
  if (s == NULL && s->bar > 2)
                   ~^~~~~

-Wnonnull (enabled by -Wall) already warned for passing a null pointer for an argument marked with the nonnull attribute. In GCC6 it has been extended to also warn for comparing an argument marked with nonnull against NULL inside a function.

nonnull.c: In function 'foo':
nonnull.c:8:7: error: nonnull argument 'bar' compared to NULL [-Werror=nonnull]
  if (!bar)
      ^
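
The attribute setup that triggers this looks roughly like the following (a sketch; foo and bar are the names from the warning above, the rest is assumed):

/* nonnull.c - compile with: gcc -Wall -c nonnull.c */
extern void use (const char *);

__attribute__ ((nonnull))
void
foo (const char *bar)
{
  /* bar was declared nonnull, so GCC6 flags this comparison.  */
  if (!bar)
    return;
  use (bar);
}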

C++

-Wterminate (enabled by default) warns when a throw will immediately result in a call to terminate, like in a noexcept function. In particular it will warn when something is thrown from a C++11 destructor, since destructors default to noexcept in C++11, unlike in C++98 (GCC6 defaults to -std=gnu++14).

collect.cxx: In destructor 'area_container::~area_container()':
collect.cxx:23:50: warning: throw will always call terminate() [-Wterminate]
    throw sanity_error ("disposed while negative");
                                                 ^
collect.cxx:23:50: note: in C++11 destructors default to noexcept

To help with some ODR issues GCC6 has -Wlto-type-mismatch and -Wsubobject-linkage.

C++ allows “placement new” of objects at a specified memory location. You are responsible for making sure the memory location provided is of the correct size. This might result in A New Class of Buffer Overflow Attacks. When GCC6 detects the provided buffer is too small it will issue a warning with -Wplacement-new (enabled by default).

placement.C: In function 'S* f(S*)':
placement.C:9:27: warning: placement new constructing an object of type 'S' and size '16' in a region of type 'char [8]' and size '8' [-Wplacement-new=]
     S *t = new (buf) S (*s);
                           ^

And if you actually want less C++ then GCC6 gives you -Wtemplates, -Wmultiple-inheritance, -Wvirtual-inheritance and -Wnamespaces to help enforce coding styles that avoid those C++ features.

Unused side effects

In GCC6 -Woverride-init-side-effects (enabled by default) is a warning of its own for using Designated Initializers multiple times with side effects. If the same field, or array element, is initialized multiple times, it has the value from the last initialization. But if any such overridden initialization has side effects, it is unspecified whether the side effect happens or not. So you’ll get a warning for such overrides:

side.c: In function 'foo':
side.c:18:68: warning: initialized field with side-effects overwritten [-Woverride-init-side-effects]
struct Secrets s = { .alpha = count++, .beta = count++, .alpha = count++ };
                                                                 ^~~~~
side.c:18:68: note: (near initialization for 's.alpha')
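
Note that without side effects the override itself is well defined and silent under the default warnings; the last initializer simply wins. A tiny illustration (the struct and names are mine):

struct S { int a; int b; };

/* No side effects: no -Woverride-init-side-effects warning,
   and s.a is reliably 3 because the last initializer wins.  */
struct S s = { .a = 1, .b = 2, .a = 3 };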

Before GCC6 -Wunused-variable (enabled by -Wall) didn’t warn for unused static const variables in C. This was because some old code had constructs like static const char rcs_id[] = "$Id:...";. But this old special use case is not very common anymore, and not warning about such unused variables was hiding real bugs. So GCC6 introduces -Wunused-const-variable (enabled by -Wunused-variable for C, but not for C++). There is still some debate on how to fine-tune this warning, so please comment if you find some time to experiment with it before GCC6 is officially released.

framed

Calling __builtin_frame_address or __builtin_return_address with a level other than zero (the current function) is almost always a mistake (it cannot be guaranteed to return a valid value and might even crash the program). So GCC6 now has -Wframe-address (enabled by -Wall) to warn about any such usage.
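
A hedged sketch of the kind of call it flags (the function is hypothetical):

/* framed.c - compile with: gcc -Wall -c framed.c */
void *
callers_return_address (void)
{
  /* Level 0 (the current function) is fine; any non-zero level
     triggers -Wframe-address because the result may be invalid
     or even crash the program.  */
  return __builtin_return_address (1);
}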

Where are your symbols, debuginfo and sources?

A package is more than a binary – make it observable

Introduction

I gave a presentation at Fosdem 2016 in the distributions developer room. This article is an extended version of the slide presenter notes. You can get the original from the talk page (press ‘h’ for help and ‘p’ to get the presenter view for the slides).

If any of this sounds interesting and you would like to get help implementing some of this for your distribution please contact me. I work upstream on valgrind and elfutils, which take advantage of having symbols and debuginfo available for programs. And elfutils is used by systemtap, systemd and perf to use some of that information (if available). I am also happy to work on gdb, gcc, rpm, binutils, etc. if that would help make some of this possible or more usable. I work for Red Hat which might explain some of my bias towards how Fedora handles some of this. Please do point out when I am making assumptions that are just plain wrong for other distributions. Ideally all this will be usable cross-distros and when running programs in VMs or containers based on different distro images, you’ll simply reach in and trace, profile and debug anything running seamlessly.

Goal

The main goal is to seamlessly go from a binary back to the original source. In this case limited to “native” ELF code (stuff you build with GCC or any other compiler that produces native code and sufficiently good debuginfo). Once we get this right for “native code” we can look at how to setup similar conventions for other language execution environments.

Whether you are tracing, profiling or debugging a locally running program, get a core file or want to interpret trace or profile data captured on some system, we want to make sure as many symbols, debuginfo and sources are available and easily accessible. Get them wherever they are. Or have a standard way to get them.

I know how most of this stuff works in Fedora, and how to improve some things for that distribution. But I would need help with other distributions and sanity checking that these ideas make sense in other contexts.

Why?

If you are running Free Software then you should be able to get back at the source code of your binaries. Code is never perfect and real issues always happen in production. Every user really is (and should be allowed to be) a “debugger”, observing (tracing, profiling) their system as it is running. Actual running (optimized) code in a specific setup really is different from development code: you will observe different behavior in an actual deployed binary compared to how it behaved on the packager or developer setup.

And this isn’t just for the user on the machine getting useful backtraces. The user might just capture a trace or profile on their machine. Or you might get a core file that needs analysis “off-line”. Then having everything ready beforehand makes recreating a “debug environment” that matches the “production environment” precisely so much easier.

Meta observation

We do want users to trace, profile and debug processes running on their systems so they know precisely what is going on with their machine. But we also care about security, so all code should run with the minimal privileges possible. Different users shouldn’t be able to trace each other’s processes, services should run in separate security contexts, processes handling sensitive data should make sure to pin security sensitive memory to prevent dumping such data to disk, and processes that aren’t supposed to use introspection syscalls should be sandboxed. That is all good stuff. It makes sure users/services can synchronize, signal, debug, trace and profile their own processes, but not more than that.

There are however some kernel tweaks that don’t obey process separation and don’t respect different security scopes, like setting selinux/yama ptrace_deny/scope. Enabling those will break stuff and will cause use of more privileged code than necessary. These “deny ptrace” features aren’t just about blocking the ptrace system call. They don’t just block “debuggers”. They block all inter-process synchronization, signaling, tracing and profiling by normal (unprivileged) users. Both settings were tried in Fedora and were disabled by default in the end, because with them users can no longer observe their own processes, so they have to raise their privileges to root. It also means a privileged monitoring process cannot just drop privileges to trace or profile lesser privileged code. So you’ll have to debug, profile and trace as root! It can also be seen as a form of security theater: a compromised process running in the same user/security context might not be able to easily observe another process directly, but it can still get at the same inputs, read and manipulate the other process’s files and settings, install preload code to disable any restrictions, etc. That makes observing other processes much more cumbersome, but not impossible.

So please don’t use these system breaking tweaks on normal setups where users and administrators should be able to monitor their own processes. We need real solutions that don’t require running everything as root and that respect normal user privileges and security contexts.

build-ids

A build-id is a globally unique identifier for an executable ELF image. Luckily everybody gets this right these days (all support is upstream and enabled by default in the GNU toolchain). A build-id is an (allocated) ELF note put into the binary by the linker. It is (normally) the SHA1 hash over all code sections in the ELF image. The build-id can be found in each executable, shared library, kernel, module, etc. It is loaded into memory and automatically dumped into core files.

When you know the build-ids and the addresses where the ELF images are/were loaded then you have enough information to match any address to original source.
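
To illustrate how accessible the build-id is, here is a hedged sketch that prints it for a given ELF file using elfutils (dwelf_elf_gnu_build_id does the note walking for you; error handling is mostly trimmed):

/* buildid.c - compile with something like: gcc buildid.c -lelf -ldw */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <libelf.h>
#include <elfutils/libdwelf.h>

int
main (int argc, char **argv)
{
  if (argc != 2)
    return 1;
  elf_version (EV_CURRENT);  /* Initialize libelf.  */
  int fd = open (argv[1], O_RDONLY);
  Elf *elf = elf_begin (fd, ELF_C_READ, NULL);
  const void *build_id;
  ssize_t len = dwelf_elf_gnu_build_id (elf, &build_id);
  if (len > 0)
    {
      const unsigned char *p = build_id;
      for (ssize_t i = 0; i < len; i++)
        printf ("%02x", p[i]);
      putchar ('\n');
    }
  elf_end (elf);
  close (fd);
  return len > 0 ? 0 : 1;
}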

If your build is reproducible then the build-id will also be exactly the same. The build-id identifies the executable code. So stripping symbols or adding debuginfo doesn’t change it. And in theory with reproducible builds you could “just” rebuild your binaries with all debug options turned on (GCC guarantees that producing debug output will not change the executable code generated) and not strip out any symbols. But that is not really practical and a bit cumbersome (you will need to also keep around the exact build environment for any binary on the system).

Because they are so useful and so essential it really makes sense to make it an error when no build-id is found in an executable or shared library, not just warn about it when creating a package.

backtraces/unwind tables

Backtraces are the backbone of everything (tracing, profiling, debugging). They provide the necessary context for any observation. If you have any observability this should be it. To make it possible to get accurate and precise backtraces in any context always use gcc -fasynchronous-unwind-tables everywhere. It is already the default on the most widely used architectures, but you should enable it on all architectures you support. Fedora already does this (either through making it the default in gcc or by passing it explicitly in the build flags used by rpmbuild).

This will get you unwind tables which are put into .eh_frame sections, which are always kept with the binary and loaded in memory and so can be accessed easily and fast. Frame pointers only get you so far: it is always the tricky code, signal handlers, fork/exec/clone, unexpected termination in the prologue/epilogue, that manipulates the frame pointer. And it is often these tricky situations where you want accurate backtraces the most, and where you get bad results when relying only on frame pointers. Maintaining frame pointers also bloats code and reduces optimization opportunities. GCC is really good at automatically generating unwind tables for any higher level language. And glibc now has CFI for all/most hand written assembler.

The only exception might be the kernel (mainly because linux kernel modules are ET_REL files, for which loadable .eh_frame sections are somewhat problematic). But even for the kernel please do generate accurate unwind tables and then put those in the .debug_frame section (which can then be stripped out and put into a separate debug file later). You can do this with the .cfi_sections assembler directive.

function symbols

When you do get backtraces for observations it would be really nice to immediately be able to match any addresses to the function names from the original source code. But normally only the .dynsym symbols are available (these are only those symbols that are necessary for dynamic linking your application and shared libraries). The full .symtab is normally stripped away since it is strictly only necessary for the linker combining object files.

Because .dynsym provides too few symbols and .symtab provides too many, Fedora introduced the mini-symtab (sometimes called mini-debuginfo). This is a special (non-loaded) .gnu_debugdata section that contains an xz-compressed ELF image. This ELF image contains minimal .symtab + .strtab sections for just the function symbols of the original .symtab section.

gdb and elfutils support using/reading .gnu_debugdata upstream. But it is only generated by some obscure function inside the rpm find-debuginfo.sh script. This really should be its own reusable script/program.

An alternative might be to just not strip the full .symtab out together with the full debuginfo and maybe use the new ELF compressed section support. (valgrind needs the full .symtab in some cases – although only really for ld.so, and valgrind doesn’t support compressed sections at the moment).

Together with accurate unwind tables having the function symbols available (and not stripped away or put into a separate debug file that might not be immediately accessible) provides the minimal requirements for easy, simple and useful tracing and profiling.

Full debuginfo

Other debug information can be stored separately from the main executable, but we still need to generate it. Some recommendations:

  • Always use -g (-gdwarf-4 is the default in recent GCC)
  • Do NOT disable -fvar-tracking-assignments
  • gdb-add-index (.gdb_index)
  • Maybe use -g3 (adds macro definitions)

This is a lot of info and sadly somewhat entangled. But please always generate it and then strip it into a separate .debug file.

This will give you inlines (program structure, which code ended up where), arguments to functions and local variables plus which values they have at which point in the program, the types and structures used by the program, and the matching of addresses to source lines.

.gdb_index provides debuggers a quick way to navigate some of these structures, so they don’t need to scan it all at startup, even if you only want to use a small portion. -g3 used to be very expensive, but recent GCC versions generate much denser data. Nobody really uses macro definitions much though, since nobody generates them… so chicken, egg. Both indexing and dense macros are proposed as DWARFv5 extensions.

Disabling -fvar-tracking-assignments, which is enabled by default in gcc now, really gives very poor results. Some projects disable it because they are afraid that generating extra debuginfo will somehow impact the generated code. If it ever does, that is a bug in GCC. If you do want to double check then you can enable GCC -fcompare-debug or define the environment variable GCC_COMPARE_DEBUG to explicitly make GCC check this invariant isn’t violated.

Compressing

Full debuginfo is big! So yes, compression is something to think about. But ELF section compression is the wrong level. It isn’t supported by many programs (valgrind for example doesn’t). There are two variants (if you use any, please avoid .zdebug, which is now a deprecated GNU extension). It prevents simply mmapping the data and using an index to only read/use what you need. And it causes very slow startup.

You should however use DWZ, the DWARF optimization and duplicate removal tool. Given all debuginfo in a package this tool will make sure duplicate DWARF information is stored in a common place, reducing the size of the individual debug files.

You could use both DWZ and ELF section compression together if you really want to get the most compression. But I would recommend using DWZ only and then compress the whole file(s) for storage (like in a package), but install them uncompressed for direct usage.

Sources

The DWARF debuginfo references sources, and you really want to have them easily available. So package the (generated) sources (as they were compiled) and put them somewhere under /usr/src/debug/[package-version]/.

There is however one gotcha. DWARF references the sources where they were built. So unless you put and build the sources precisely where you want to install them, you will have to adjust the source paths. This can be done in two ways:

  • rpm debugedit
  • gcc -fdebug-prefix-map=old=new

debugedit is both more and less flexible. It is more flexible because it provides you the actual source file list used in the DWARF describing the program. It is less flexible because it isn’t a full DWARF rewriter: it can only adjust the locations/directories as long as the new paths aren’t longer than the old ones… So set up a big enough build root PATH name. It is probably time to rewrite debugedit to support proper DWARF rewriting and make it an independent tool that can easily be reused, not just by rpm.

Separating and “linking”

There are two ways to “link” your binaries to their debug files:

  • .gnu_debuglink section in main file with name (and CRC) of .debug file
  • /usr/lib/debug/.build-id/XX/XXXXXXXXXXXXXXXXXXXXXX.debug

The .gnu_debuglink name has to be searched under well known paths (/usr/lib/debug + original location and/or subdirs). This makes it fragile, but more tools support it and it is the fallback used when there is no build-id. But it might be time to deprecate/remove it because such fixed paths inherently conflict between package versions.
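
To make the fragility concrete, here is a hedged sketch of the section lookup a tool has to do for the debuglink (just extracting the name and CRC; the search over the well known paths is left out):

/* Sketch: extract the .gnu_debuglink name and CRC from an opened
   Elf handle (elfutils libelf; error checks mostly trimmed).
   Section layout: NUL-terminated file name, padding to a 4 byte
   boundary, then a 32-bit CRC of the debug file (stored in the
   byte order of the ELF file, assumed here to match the host).  */
#include <string.h>
#include <stdint.h>
#include <libelf.h>
#include <gelf.h>

const char *
get_debuglink (Elf *elf, uint32_t *crc)
{
  size_t shstrndx;
  if (elf_getshdrstrndx (elf, &shstrndx) < 0)
    return NULL;
  Elf_Scn *scn = NULL;
  while ((scn = elf_nextscn (elf, scn)) != NULL)
    {
      GElf_Shdr shdr;
      const char *name = (gelf_getshdr (scn, &shdr) == NULL ? NULL
                          : elf_strptr (elf, shstrndx, shdr.sh_name));
      if (name != NULL && strcmp (name, ".gnu_debuglink") == 0)
        {
          Elf_Data *data = elf_getdata (scn, NULL);
          if (data == NULL || data->d_size < 8)
            return NULL;
          memcpy (crc, (char *) data->d_buf + data->d_size - 4, 4);
          return (const char *) data->d_buf;
        }
    }
  return NULL;
}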

Fedora supports both, by making the build-id.debug file a link to the debuglink file. Fedora also throws in an extra link to the main exe under .build-id. But that link lives in the debuginfo package, so it could mismatch if the main package and debuginfo package versions don’t match up. It is not recommended to mimic this setup.

Preventing conflict

This is work in progress in Fedora:

  • Wanting to install both 64-bit and 32-bit debug packages.
  • Having an older/newer version of a debuginfo package installed (e.g. when inspecting a core file).

By making debuginfo packages parallel installable across arches and versions you should be able to easily trace, profile and debug 32-bit and 64-bit programs at the same time, inspect a core file generated against slightly different versions of the executable and libraries than those installed on the developer machine, and install all debug files matching the executables running in a container for deep inspection.

To get there:

  • Hash in full name-version-arch of package into build-id.
  • Get rid of .gnu_debuglink files.
  • No more build-id main file backlinks.
  • Put sources under full name-version-arch subdir

This is where I still have more questions than answers. build-ids can conflict for minor version updates (the files should be completely identical though). Should we hash-in the full package name to make them unique again or accept that multiple packages can provide the same ELF images/build-ids? Dropping .gnu_debuglink (or changing install/renamed paths) will need program updates. Should the build-id-main-file backlinks be moved into the main package?

Should we really package debug files?

We might also want to explore alternatives to parallel installable debuginfo packages. Would it make sense to completely do away with debuginfo packages by:

  • Making /usr/lib/debug and /usr/src/debug “magic” fuse file systems?
  • Populate through something like darkserver
  • Have a cross-distro federated registry of build-ids?

Something like the above is being experimented with in the Clear Linux Project.

ELF, libelf, compressed sections and elfutils

ELF (Executable and Linkable Format) files are the standard binary format for GNU/Linux and other Unix-like systems. They are used for executables, shared libraries, object and core files. There are two ways the data in an ELF file are described. The Program Headers describe the segments of the file that need to be mapped into memory for runtime execution. The Section Headers describe what kind of data is in the different sections of the file (executable code, symbols, strings, etc.), a name and miscellaneous information often only needed for linking object files together. Normally only sections that have the SHF_ALLOC flag set are also described in the Program Headers. But two or more sections with the same flags can be combined into one segment (if they are placed consecutively in the file). Most section data that isn’t allocated can be removed from executable and shared library files because they aren’t strictly necessary at runtime and won’t be automatically loaded into memory by the kernel or dynamic loader. Sections such as string or symbol tables and debug information are often stripped out of the original file and put in a separate (ELF) debuginfo file. This doesn’t (or shouldn’t) impact anything during runtime, but does make understanding what is going on, which address corresponds to which function symbol or original source line trickier. Stripping ELF files of non-allocated sections is often done to save disk space.

Another way to save space is by compressing data. The GNU toolchain supports a way to compress individual ELF sections by renaming a section name from .debug* to .zdebug*. The data for such sections starts with 4 chars ZLIB, a 64bit unsigned integer in big endian encoding to provide the original section sh_size and then the ZLIB compressed data. This convention is supported by various GNU tools, but not very widely outside those. So it would work with GDB but not with valgrind for example. elfutils only provided partial support for GNU style .zdebug sections in ET_EXEC or ET_DYN ELF files, but not in ET_REL files like kernel modules (or their separate debuginfo files). The reason for the partial support was that depending on the name of a section is a little awkward and might not always be convenient to do (section names themselves are placed in a section which you have to locate first). For example it makes determining how to apply relocations (which you might have to do for ET_REL files) tricky, because you first need to lookup the target section name to determine whether or not it needs to be decompressed first. So this works great if you work with just the GNU tools for fully linked ELF files and for the specially named sections that contain DWARF information, but not really for any other ELF data.

Ali Bahrami, who works on the core Solaris OS and linker, liked the basic idea of GNU compressed ELF sections, but wanted to have something more generic that didn’t depend on magic section names. So he started an effort to extend the ELF specification to provide a standardized way that could be adopted by anything that supports ELF. The ELF specification is contained in the System V Application Binary Interface, also known as the Generic ABI (gABI), which is maintained on a public mailinglist generic-abi. This resulted in the following definitions, as implemented in GLIBC (2.22+) elf.h:

#define SHF_COMPRESSED      (1 << 11)  /* Section with compressed data. */

/* Section compression header.  Used when SHF_COMPRESSED is set.  */

typedef struct
{
  Elf32_Word   ch_type;        /* Compression format.  */
  Elf32_Word   ch_size;        /* Uncompressed data size.  */
  Elf32_Word   ch_addralign;   /* Uncompressed data alignment.  */
} Elf32_Chdr;

typedef struct
{
  Elf64_Word   ch_type;        /* Compression format.  */
  Elf64_Word   ch_reserved;
  Elf64_Xword  ch_size;        /* Uncompressed data size.  */
  Elf64_Xword  ch_addralign;   /* Uncompressed data alignment.  */
} Elf64_Chdr;

/* Legal values for ch_type (compression algorithm).  */
#define ELFCOMPRESS_ZLIB       1          /* ZLIB/DEFLATE algorithm.  */
#define ELFCOMPRESS_LOOS       0x60000000 /* Start of OS-specific.  */
#define ELFCOMPRESS_HIOS       0x6fffffff /* End of OS-specific.  */
#define ELFCOMPRESS_LOPROC     0x70000000 /* Start of processor-specific.  */
#define ELFCOMPRESS_HIPROC     0x7fffffff /* End of processor-specific.  */

The new SHF_COMPRESSED flag is set on the section sh_flags and indicates the section is compressed. Such sections start with a Chdr (Elf32_Chdr for 32bit ELF files, Elf64_Chdr for 64bit ELF files) followed by the compressed data. The Chdr values are encoded according to the big/little endianness of the ELF file. There is only one standard ch_type compression type defined (ELFCOMPRESS_ZLIB), but lots of room for alternatives. Note that zero is not a valid value (and does NOT mean uncompressed). The ch_size is the original (uncompressed) sh_size of the section (a compressed section’s sh_size is the size of the compressed data plus the size of the Chdr). The ch_addralign is the section sh_addralign for the uncompressed data (the compressed section’s sh_addralign is the alignment of the Chdr plus compressed data if it needs one).

In this scheme all indexes into a section, like relocations or string table indexes, are assumed to apply to the uncompressed section data and never as an index into the data of a compressed section. Also note that the section sh_entsize applies to the uncompressed data entry size (when the uncompressed section holds a table of same size entries). This last fact can potentially trigger some overeager sanity check failures in implementations that don’t understand the SHF_COMPRESSED flag yet, when they try to check that the sh_size is a multiple of the sh_entsize (for compressed sections it is the ch_size that should be a multiple of the sh_entsize). Luckily that is somewhat rare (but would trigger in older elfutils when writing out a file with compressed sections and a sh_entsize > 0). Apart from those issues the change is mostly backward compatible for programs just reading ELF files. They can treat compressed sections as containing opaque data as long as they don’t need to interpret it (which is mostly true for unallocated SHT_PROGBITS sections, to which compression will most likely have been applied). The sections can be moved and copied around as is, as long as the sh_flags are kept intact. Anything dealing with only program headers and runtime execution of ELF isn’t impacted at all.

Besides standardizing the ELF file format change we also collaborated on some interfaces to easily use and manipulate ELF files containing compressed sections. The libelf library is not really formally standardized, but through the generic-abi mailinglist (and sometimes private emails between the maintainers) we try to keep the interfaces source compatible. It provides two sets of interfaces: the libelf.h interface, which mainly abstracts away the on-disk and native in-memory representations (so you can easily read and manipulate big endian ELF files on a little endian platform, or the other way around), and the gelf.h interface, which abstracts away the differences between 32 bit and 64 bit ELF files. elfutils provides the libelf implementation for GNU/Linux, but there are also other implementations, including for BSD and proprietary UNIX-like systems such as Solaris.

To support compressed sections in libelf we came up with two simple interfaces (after a long debate discussing various much more complex variants and scratching our heads over how to deal with various corner cases). First there are some extensions, as implemented in elfutils libelf.h, to get the Chdr in the correct in-memory representation:

/* Returns compression header for a section if section data is
   compressed.  Returns NULL and sets elf_errno if the section isn't
   compressed or an error occurred.  */
extern Elf32_Chdr *elf32_getchdr (Elf_Scn *__scn);
extern Elf64_Chdr *elf64_getchdr (Elf_Scn *__scn);

And for elfutils gelf.h to abstract away 32/64 bit differences:

/* Header of a compressed section.  */
typedef Elf64_Chdr GElf_Chdr;

/* Get compression header of section if any.  Returns NULL and sets
   elf_errno if the section isn't compressed or an error occurred.  */
extern GElf_Chdr *gelf_getchdr (Elf_Scn *__scn, GElf_Chdr *__dst);
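
A hedged sketch of how a consumer would combine these to inspect a section (error handling trimmed):

/* Report whether a section is compressed and its uncompressed size.  */
#include <stdio.h>
#include <inttypes.h>
#include <libelf.h>
#include <gelf.h>

void
report_section (Elf_Scn *scn)
{
  GElf_Shdr shdr;
  if (gelf_getshdr (scn, &shdr) == NULL)
    return;
  if (shdr.sh_flags & SHF_COMPRESSED)
    {
      GElf_Chdr chdr;
      if (gelf_getchdr (scn, &chdr) != NULL
          && chdr.ch_type == ELFCOMPRESS_ZLIB)
        printf ("zlib compressed: %" PRIu64 " bytes on disk, %" PRIu64
                " uncompressed\n", shdr.sh_size, chdr.ch_size);
    }
  else
    printf ("uncompressed: %" PRIu64 " bytes\n", shdr.sh_size);
}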

Then there are the following two functions (and one flag) for compressing/decompressing a section for both the new and old (deprecated) GNU format as implemented in elfutils libelf.h:

/* Flags for elf_compress[_gnu].  */
enum
{
  ELF_CHF_FORCE = 0x1
#define ELF_CHF_FORCE ELF_CHF_FORCE
};

/* Compress or decompress the data of a section and adjust the section
   header.

   elf_compress works by setting or clearing the SHF_COMPRESS flag
   from the section Shdr and will encode or decode a Elf32_Chdr or
   Elf64_Chdr at the start of the section data.  elf_compress_gnu will
   encode or decode any section, but is traditionally only used for
   sections that have a name starting with ".debug" when
   uncompressed or ".zdebug" when compressed and stores just the
   uncompressed size.  The GNU compression method is deprecated and
   should only be used for legacy support.

   elf_compress takes a compression type that should be either zero to
   decompress or an ELFCOMPRESS algorithm to use for compression.
   Currently only ELFCOMPRESS_ZLIB is supported.  elf_compress_gnu
   will compress in the traditional GNU compression format when
   compress is one and decompress the section data when compress is
   zero.

   The FLAGS argument can be zero or ELF_CHF_FORCE.  If FLAGS contains
   ELF_CHF_FORCE then it will always compress the section, even if
   that would not reduce the size of the data section (including the
   header).  Otherwise elf_compress and elf_compress_gnu will compress
   the section only if the total data size is reduced.

   On successful compression or decompression the function returns
   one.  If (not forced) compression is requested and the data section
   would not actually reduce in size, the section is not actually
   compressed and zero is returned.  Otherwise -1 is returned and
   elf_errno is set.

   It is an error to request compression for a section that already
   has SHF_COMPRESSED set, or (for elf_compress) to request
   decompression for an section that doesn't have SHF_COMPRESSED set.
   It is always an error to call these functions on SHT_NOBITS
   sections or if the section has the SHF_ALLOC flag set.
   elf_compress_gnu will not check whether the section name starts
   with ".debug" or .zdebug".  It is the responsibilty of the caller
   to make sure the deprecated GNU compression method is only called
   on correctly named sections (and to change the name of the section
   when using elf_compress_gnu).

   All previous returned Shdrs and Elf_Data buffers are invalidated by
   this call and should no longer be accessed.

   Note that although this changes the header and data returned it
   doesn't mark the section as dirty.  To keep the changes when
   calling elf_update the section has to be flagged ELF_F_DIRTY.  */
extern int elf_compress (Elf_Scn *scn, int type, unsigned int flags);
extern int elf_compress_gnu (Elf_Scn *scn, int compress, unsigned int flags);

Beside those main additions to the interfaces, the definition of elf_strptr was changed so that for a compressed section the returned string for the given index is the uncompressed string (and not a pointer into the compressed data). And elf_getdata was changed so that for a compressed section the returned Elf_Data has a d_type of ELF_T_CHDR, a new type that works as expected with the xlate functions to translate the Chdr contained in the section data to/from big/little endian format if the on-disk format and in-memory representation are different. These last two changes only work for the newly standardized ELF compressed sections, not for the old, now deprecated, GNU format.
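
As a usage illustration, here is a hedged sketch that uses elf_compress to decompress every compressed section of a writable Elf handle (opening the file with ELF_C_RDWR and the final elf_update call are left out):

/* Decompress all SHF_COMPRESSED sections in place (elfutils 0.165+).  */
#include <libelf.h>
#include <gelf.h>

void
decompress_all (Elf *elf)
{
  Elf_Scn *scn = NULL;
  while ((scn = elf_nextscn (elf, scn)) != NULL)
    {
      GElf_Shdr shdr;
      if (gelf_getshdr (scn, &shdr) != NULL
          && (shdr.sh_flags & SHF_COMPRESSED) != 0
          /* Type zero means decompress.  */
          && elf_compress (scn, 0, 0) == 1)
        /* Flag the section dirty so elf_update writes it back out.  */
        elf_flagscn (scn, ELF_C_SET, ELF_F_DIRTY);
    }
}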

The latest release of elfutils 0.165 also comes with the eu-elfcompress tool (Solaris will have a similar tool called elfcompress) that lets you easily play with the concept of compressed ELF sections:

Usage: eu-elfcompress [OPTION...] FILE...
Compress or decompress sections in an ELF file.

  -f, --force                Force compression of section even if it would
                             become larger
  -n, --name=SECTION         SECTION name to (de)compress, SECTION is an
                             extended wildcard pattern (defaults to
                             '.?(z)debug*')
  -o, --output=FILE          Place (de)compressed output into FILE
  -p, --permissive           Relax a few rules to handle slightly broken ELF
                             files
  -q, --quiet                Be silent when a section cannot be compressed
  -t, --type=TYPE            What type of compression to apply. TYPE can be
                             'none' (decompress), 'zlib' (ELF ZLIB compression,
                             the default, 'zlib-gabi' is an alias) or
                             'zlib-gnu' (.zdebug GNU style compression, 'gnu'
                             is an alias)
  -v, --verbose              Print a message for each section being
                             (de)compressed
  -?, --help                 Give this help list
      --usage                Give a short usage message
  -V, --version              Print program version

Finally in elfutils 0.165 the libdw library, which provides interfaces to use DWARF debugging information in ELF files, plus various helpers for reading symbol tables, finding separate debuginfo files corresponding to shared libraries, executables, the kernel (modules), core files or running processes and producing backtraces, now transparently works with compressed ELF sections. So if you are just using the elfutils libdw.h or libdwfl.h interfaces all of the above is just an implementation detail.

Looking forward to GCC6 – nice new warning -Wmisleading-indentation

The GNU Compiler Collection is making some great progress. Playing with the current development version I cannot wait till GCC6 is officially out. The new warnings look beautiful and are more useful because of the range tracking. I just found an embarrassing bug thanks to the new -Wmisleading-indentation:

libebl/eblobjnote.c: In function ‘ebl_object_note’:
libebl/eblobjnote.c:135:5: error: statement is indented as if it were guarded by... [-Werror=misleading-indentation]
    switch (type)
    ^~~~~~

libebl/eblobjnote.c:45:3: note: ...this ‘if’ clause, but it is not
  if (! ebl->object_note (name, type, descsz, desc))
  ^~

And indeed, it should have been under the if, but wasn't because of missing braces. Woops. Thanks GCC.

Copyleft makes the (java) world turn around

Glad to see a little bit more copyleft being adopted by Android now that they are using parts of the OpenJDK class library. Even if the GNU Classpath Exception is probably the weakest form of copyleft there is. It is interesting how the GPL makes frenemies like Oracle and Google work together.

Software Freedom Conservancy

I support the Software Freedom Conservancy because they provide a virtual home for Free Software communities. In their own words:

Software Freedom Conservancy is a not-for-profit organization that helps promote, improve, develop, and defend Free, Libre, and Open Source Software (FLOSS) projects. Conservancy provides a non-profit home and infrastructure for FLOSS projects. This allows FLOSS developers to focus on what they do best — writing and improving FLOSS for the general public — while Conservancy takes care of the projects’ needs that do not relate directly to software development and documentation.

Some projects receive support from or are managed by companies or trade associations that benefit from the software the community produces. That is great as long as the community objectives and the company profit motives are aligned. Free Software is a good way for companies to work together. The services that the Conservancy provides allows projects to define their own terms and conditions for the community to work together. And companies can then join on equal terms. Making sure the project and community will work together for the public benefit.

Please support the Software Freedom Conservancy by donating so they will be able to provide a home to many more communities. A donation of 10 US dollars a month will make you an official sponsor. Or donate directly to one of their many member projects.

Software Freedom Conservancy Member Projects

Easy Hacks for Valgrind

Last weekend I did a talk on How to start hacking on Valgrind by example at Fosdem, which contains some Easy hacks for valgrind. If you always wanted to hack on Valgrind, but haven't really looked at the code yet, then this might be a nice introduction. Make sure to also read the slides for all the other Valgrind devroom talks. Much thanks to the Fosdem organization for letting the Valgrind hackers meet. It was a great weekend.

Appeal to Reason

Bradley M. Kuhn wrote an analysis on the recent Appeals Court Decision in Oracle v. Google. Pointing out who the real winners are and that it will now take years before we will have more clarity:

The case is remanded, so a new jury will first sit down and consider the fair use question. If that jury finds fair use and thus no infringement, Oracle’s next appeal will be quite weak, and the Appeals Court likely won’t reexamine the question in any detail. In that outcome, very little has changed overall: we’ll have certainty that API’s aren’t copyrightable, as long as any textual copying that occurs during reimplementation is easily called fair use. By contrast, if the new jury rejects Google’s fair use defense, I suspect Google will have to appeal all the way to SCOTUS. It’s thus going to be at least two years before anything definitive is decided, and the big winners will be wealthy litigation attorneys — as usual.

You will want to read the whole thing to know why from a copyleft perspective this decision will give that strange feeling of simultaneous annoyance and contentment.

Java bug CVE-2012-4681

There seems to be a nasty bug out there in some implementations of Java 7, including IcedTea7 and OpenJDK7. The bug is very public and being actively abused to circumvent security restrictions. Please upgrade to IcedTea 2.3.1 or build your packages using the patch as discussed on the OpenJDK mailinglists.

Note that if you are using the icedtea-web applet viewer then you are not directly vulnerable to the exploits currently out there in the wild, since those try to disable the SecurityManager completely and icedtea-web doesn't allow that (some proprietary applet plugins do allow it though). But there are other ways to abuse this bug to circumvent security restrictions in a more subtle way, so patching is still highly recommended.

classpath/icedtea server updates

Some classpath/icedtea servers changed networks/ip addresses on Sunday. Changes should propagate through DNS on Monday. This can cause connection errors to planet.classpath.org, builder.classpath.org (buildbot and jenkins) and icedtea.wildebeest.org (hg backups). Apologies for the late notice.