LLVM Backend for Native Image
Native Image includes an alternative backend which uses the LLVM intermediate representation and the LLVM compiler to produce native executables.
To use it, add the -H:CompilerBackend=llvm option to the Native Image invocation.
The LLVM backend requires GraalVM's LLVM toolchain to be installed (with gu install llvm-toolchain).
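As a concrete illustration, a complete build might look like the following (HelloWorld is an assumed example class, not part of the documentation; the commands require a GraalVM installation):

```shell
# One-time setup: install GraalVM's LLVM toolchain.
gu install llvm-toolchain

# Compile the Java source, then build a native executable
# using the LLVM backend instead of the default backend.
javac HelloWorld.java
native-image -H:CompilerBackend=llvm HelloWorld

# Run the resulting executable (named after the class, lowercased).
./helloworld
```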
Code Generation Options #
- -H:+SpawnIsolates: enables isolates, which are disabled by default when using the LLVM backend as they incur a performance penalty.
- -H:+BitcodeOptimizations: enables aggressive optimizations at the LLVM bitcode level. This is experimental and may cause bugs.
Debugging Options #
- -H:TempDirectory=: specifies where the files generated by Native Image are saved. The LLVM files are saved under SVM-<timestamp>/llvm in this folder.
- -H:LLVMMaxFunctionsPerBatch=: specifies the maximum size of a compilation batch (see "About batches" below). Setting it to 1 compiles every function separately; setting it to 0 compiles everything as a single batch.
- -H:DumpLLVMStackMap=: specifies a file in which to dump debugging information, including a mapping between compiled functions and the names of the corresponding bitcode files.
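Put together, a debugging-oriented invocation might look like this (the class name and paths are illustrative, not part of the documentation):

```shell
# Build with the LLVM backend, keeping intermediate files in a known
# location, compiling each function in its own batch, and dumping the
# stack map for inspection.
native-image -H:CompilerBackend=llvm \
    -H:TempDirectory=/tmp/llvm-build \
    -H:LLVMMaxFunctionsPerBatch=1 \
    -H:DumpLLVMStackMap=stackmap.txt \
    HelloWorld

# The generated LLVM files end up under /tmp/llvm-build/SVM-<timestamp>/llvm.
```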
About batches: LLVM compilation happens in four phases:
- LLVM bitcode files (named f1.bc, etc.) are created for each function.
- The bitcode files are linked into batches (named b1.bc, etc.). This phase is skipped when every function is compiled separately (batch size 1).
- The batches are optimized (into b1o.bc, etc.) and then compiled into object files.
- The compiled batches are linked into a single object file (llvm.o), which is then linked into the final executable.
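The files produced by each phase can be inspected in the temporary directory; a hypothetical listing (assuming -H:TempDirectory=/tmp/llvm-build was passed) might look like:

```shell
# The SVM-<timestamp> directory name varies per build.
ls /tmp/llvm-build/SVM-*/llvm
# f1.bc, f2.bc, ...   per-function bitcode (phase 1)
# b1.bc, b2.bc, ...   linked batches (phase 2)
# b1o.bc, ...         optimized batches (phase 3)
# llvm.o              single linked object file (phase 4)
```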
How to Add a Target Architecture to GraalVM Using LLVM Backend #
An interesting use case for the LLVM backend is targeting a new architecture without having to implement a complete new backend for Native Image. The following steps are currently necessary to achieve this.
Target-Specific LLVM Settings #
There are a few instances where the GraalVM code has to go deeper than the target-independent nature of LLVM. These are most notably inline assembly snippets to implement direct register accesses and direct register jumps (for trampolines), as well as details about the structure of the stack frames of the code emitted by LLVM. All in all, this represents less than a dozen simple values to be set for each new target. Our goal is that, in the future, this will be the only addition needed to support a new target.
LLVM Statepoint Support #
While the LLVM backend mostly uses common, well-supported features of LLVM, garbage collection support relies on statepoint intrinsics, an experimental feature of LLVM. This feature is currently only supported for x86_64, and we are pushing for the inclusion of the GraalVM implementation for AArch64 in the LLVM code base. This means that, unless a significant effort is put together by the LLVM community, supporting a new architecture will require implementing statepoints in LLVM for the requested target. As most of the statepoint logic is handled at the bitcode level, i.e., at a target-independent stage, this is mostly a matter of emitting the right type of calls to lower the statepoint intrinsics. Our AArch64 implementation of statepoints consists of less than 100 lines of code.
Object File Support #
The data section for programs created with the LLVM backend of the Graal compiler is currently emitted independently from the code, which is handled by LLVM. This means that the Graal compiler needs an understanding of object file relocations for the target architecture to be able to link the LLVM-compiled code with the GraalVM-generated data section (see ELFMachine$ELFAArch64Relocations for an example). Emitting the data section together with the code as LLVM bitcode is our next priority for the LLVM backend, so this should not be an issue for future targets.