LLVM Backend for Native Image

Native Image includes an alternative backend which uses the LLVM intermediate representation and the LLVM compiler to produce native executables. To use it, add the -H:CompilerBackend=llvm option to the Native Image invocation.

The LLVM backend requires Graal’s LLVM toolchain to be installed (with gu install llvm-toolchain).
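For example, assuming a GraalVM installation with gu on the PATH and an already compiled main class (HelloWorld is a placeholder name), the setup and a basic invocation look like this:

```shell
# Install the LLVM toolchain component (one-time setup).
gu install llvm-toolchain

# Build a native executable with the LLVM backend instead of the default one.
# HelloWorld is a placeholder for your own main class.
native-image -H:CompilerBackend=llvm HelloWorld
```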

Code Generation Options

  • -H:+SpawnIsolates: Enables isolates, which are disabled by default when using the LLVM backend as they incur a performance penalty.
  • -H:+BitcodeOptimizations: Enables aggressive optimizations at the LLVM bitcode level. This is experimental and may cause bugs.
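Both options can be combined with the backend selection on a single command line; for instance (HelloWorld is a placeholder class name):

```shell
# Re-enable isolates and opt into bitcode-level optimizations,
# accepting the performance and stability trade-offs noted above.
native-image -H:CompilerBackend=llvm \
  -H:+SpawnIsolates \
  -H:+BitcodeOptimizations \
  HelloWorld
```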

Debugging Options

  • -H:TempDirectory=: Specifies where the files generated by Native Image will be saved. The LLVM files are saved under SVM-<timestamp>/llvm in this folder.
  • -H:LLVMMaxFunctionsPerBatch=: Specifies the maximum number of functions in a compilation batch (see “About batches” below). Setting it to 1 compiles every function separately, while 0 compiles everything as a single batch.
  • -H:DumpLLVMStackMap=: Specifies a file in which to dump debugging information, including a mapping between compiled functions and the name of the corresponding bitcode file.
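A typical debugging invocation combines these options; the paths below (/tmp/ni-llvm, stackmap.txt) and the HelloWorld class are placeholders:

```shell
# Keep the generated LLVM files in a known location, compile each function
# separately, and dump the stack map for inspection.
native-image -H:CompilerBackend=llvm \
  -H:TempDirectory=/tmp/ni-llvm \
  -H:LLVMMaxFunctionsPerBatch=1 \
  -H:DumpLLVMStackMap=stackmap.txt \
  HelloWorld

# The generated bitcode then sits under the timestamped build directory:
ls /tmp/ni-llvm/SVM-*/llvm
```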

About batches: LLVM compilation happens in four phases:

  1. LLVM bitcode files (named f0.bc, f1.bc, etc.) are created for each function.
  2. The bitcode files are linked into batches (named b0.bc, b1.bc, etc.). This phase is skipped when -H:LLVMMaxFunctionsPerBatch=1 is specified.
  3. The batches are optimized (into b0o.bc, b1o.bc, etc.) and then compiled (into b0.o, b1.o, etc.).
  4. The compiled batches are linked into a single object file (llvm.o), which is then linked into the final executable.

How to Add a Target Architecture to GraalVM Using the LLVM Backend

An interesting use case for the LLVM backend is to target a new architecture without having to implement an entirely new backend for Native Image. The following are the steps currently necessary to achieve this; we are striving to reduce them as much as possible.

Target-specific LLVM Settings

There are a few instances where the Graal code has to go deeper than the target-independent nature of LLVM. These are most notably inline assembly snippets to implement direct register accesses and direct register jumps (for trampolines), as well as details about the structure of the stack frames of the code emitted by LLVM. All in all, this represents fewer than a dozen simple values to be set for each new target, and our goal is that in the future this will be the only addition needed to support a new target.

(Complete set of values for AArch64)

LLVM Statepoint Support

While the LLVM backend mostly uses common, well-supported features of LLVM, garbage collection support requires the use of statepoint intrinsics, an experimental feature of LLVM. Currently this feature is only supported for x86_64, and we are pushing for the inclusion of our AArch64 implementation in the LLVM code base. This means that, unless a significant effort is put together by the LLVM community, supporting a new architecture will require implementing statepoints in LLVM for the requested target. As most of the statepoint logic is handled at the bitcode level, i.e., at a target-independent stage, this is mostly a matter of emitting the right type of calls to lower the statepoint intrinsics. Our AArch64 implementation of statepoints consists of fewer than 100 lines of code.

(Implementation of statepoints for AArch64)

Object File Support

The data section for programs created with the LLVM backend of the Graal compiler is currently emitted independently from the code, which is handled by LLVM. This means that Graal needs an understanding of object file relocations for the target architecture to be able to link the LLVM-compiled code with the Graal-generated data section. Emitting the data section with the code as LLVM bitcode is our next priority for the LLVM backend, so this should not be an issue for future targets.

(see ELFMachine$ELFAArch64Relocations for an example)