Running LLVM on GraalVM

GraalVM provides an implementation of the lli tool to directly execute programs from LLVM bitcode.

In contrast to the static compilation that is normally used for LLVM-based languages, lli first interprets the bitcode and then dynamically compiles the hot parts of the program using the GraalVM compiler. This allows seamless interoperability with the dynamic languages supported by GraalVM.

Run programs in LLVM bitcode format:

lli [LLI Options] [GraalVM Options] [Polyglot Options] filename.bc [program args]

Where filename.bc is a single program source file in LLVM bitcode format.

Note: LLVM bitcode is platform dependent. The program must be compiled to bitcode for the appropriate platform.

Compiling to LLVM Bitcode

GraalVM can execute C/C++, Rust, and other languages that can be compiled to LLVM bitcode. As a first step, you have to compile the program to LLVM bitcode using an LLVM frontend such as clang. C/C++ code can be compiled to LLVM bitcode using clang with the -emit-llvm option.

Here is some example C code named hello.c:

#include <stdio.h>

int main() {
    printf("Hello from GraalVM!\n");
    return 0;
}

You can compile hello.c to an LLVM bitcode file named hello.bc as follows:

$ clang -c -O1 -emit-llvm hello.c

You can then run hello.bc on GraalVM like this:

$ lli hello.bc
Hello from GraalVM!

External library dependencies

If the bitcode file depends on external libraries, they can be loaded using the --lib argument.

For example:

#include <ncurses.h>

int main() {
    initscr();
    printw("Hello, Curses!");
    refresh();
    endwin();
    return 0;
}

This can be run with:

$ clang -c -O1 -emit-llvm hello-curses.c
$ lli --lib /usr/lib/libncurses.so hello-curses.bc

On macOS, the ncurses library is located at /usr/lib/libncurses.dylib.

Running C++

For running C++ code, lli requires the libc++ standard library from the LLVM project. If you are using Ubuntu Linux, install the package libc++1.

#include <iostream>

int main() {
    std::cout << "Hello, C++ World!" << std::endl;
}

Make sure you compile your C++ code with the correct standard library:

$ clang++ -c -O1 -emit-llvm -stdlib=libc++ hello-c++.cpp
$ lli hello-c++.bc
Hello, C++ World!

Running Rust

lli does not load the Rust standard libraries automatically. To install Rust, run the following in your terminal, then follow the onscreen instructions:

curl https://sh.rustup.rs -sSf | sh

To run Rust code, the required Rust libraries have to be specified manually.

fn main() {
    println!("Hello Rust!");
}

This can be run with:

$ rustc --emit=llvm-bc hello-rust.rs
$ lli --lib $(rustc --print sysroot)/lib/libstd-* hello-rust.bc
Hello Rust!

LLVM Toolchain

The toolchain is a set of tools and APIs for compiling native programs, such as C and C++, to bitcode that can be executed with the GraalVM LLVM runtime. To simplify compiling C/C++ to LLVM bitcode, we provide launchers that invoke the compiler with special flags to produce results that can be executed by the GraalVM LLVM runtime. Depending on the execution mode, the launchers are located at $GRAALVM/jre/languages/llvm/native/bin/* (native mode) or $GRAALVM/jre/languages/llvm/managed/bin/* (managed mode). They are meant to be drop-in replacements for the C/C++ compiler when compiling a native program. The goal is to produce a GraalVM LLVM runtime executable by simply pointing the build system to those launchers, for example via the CC/CXX environment variables or by setting PATH.

Warning: Toolchain support is experimental. Experimental features might never be included in a production version, or might change significantly before being considered production-ready.

The LLVM toolchain is pre-packaged as a component and can be installed with the GraalVM Updater tool:

gu install llvm-toolchain

The following example shows how the toolchain can be used to compile a make-based project. Let us assume that the CC variable is used in the Makefile to specify the C compiler that produces an executable named myprogram. We compile the project as follows:

$ make CC=${GRAALVM}/jre/languages/llvm/native/bin/clang myprogram

Afterwards, the resulting myprogram can be executed by the LLVM runtime:

$ ${GRAALVM}/bin/lli myprogram

Use Cases

  • Simplify the compilation to bitcode: GraalVM users who want to run native projects via the GraalVM LLVM runtime must first compile these projects to LLVM bitcode. Although it is possible to do this with the standard LLVM tools (clang, llvm-link, etc.), there are several additional considerations, such as optimizations and manual linking. The toolchain aims to simplify this process, by providing an out-of-the-box drop-in replacement for the compiler when building native projects targeting the GraalVM LLVM runtime.

  • Compile native extensions: GraalVM language implementers often use the GraalVM LLVM runtime to execute native extensions, and these extensions are commonly installed by a package manager. For example, packages in Python are usually added via pip install, which means that the Python implementation is required to be able to compile these native extensions on demand. The toolchain provides a Java API for languages to access the tools bundled with GraalVM.

  • Compile to bitcode at build time: GraalVM supported languages that integrate with the GraalVM LLVM runtime usually need to build bitcode libraries to integrate with the native pieces of their implementation. The toolchain can be used as a build-time dependency to achieve this in a standardized and compatible way.

File Format

To be compatible with existing build systems, by default, the toolchain will produce native executables with embedded bitcode (ELF files on Linux, Mach-O files on macOS).

Toolchain Identifier

The GraalVM LLVM runtime can be run in different configurations, which may differ in how the bitcode is compiled. Generally, toolchain users do not need to be concerned with this, as the GraalVM LLVM runtime knows which mode it is running in and will always provide the matching toolchain. However, if a language implementation wants to store compiled bitcode for later use, it needs to be able to identify the toolchain and the configuration used to compile the bitcode. To that end, each toolchain has an identifier. Conventionally, the identifier is used as the name of the compilation output directory. The internal GraalVM LLVM runtime library layout follows the same approach.
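As a sketch of this convention, a language implementation might place compiled bitcode under a directory named after the toolchain identifier. The helper and the identifier values below ("native", "managed") are illustrative assumptions, not part of the GraalVM API:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

class BitcodeCache {
    // Hypothetical helper: the toolchain identifier contains no slashes or
    // spaces, so it can safely be used as a directory name (path suffix).
    static Path outputDir(Path base, String toolchainIdentifier) {
        return base.resolve(toolchainIdentifier);
    }

    public static void main(String[] args) {
        // Example identifiers; actual values depend on the runtime configuration.
        System.out.println(outputDir(Paths.get("bitcode-cache"), "native"));
        System.out.println(outputDir(Paths.get("bitcode-cache"), "managed"));
    }
}
```

Results compiled by different toolchains then land in distinct locations and cannot accidentally be mixed.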

Java API Toolchain Service

Language implementations can access the toolchain via the Toolchain service. The service provides two methods:

  • TruffleFile getToolPath(String tool) returns the path to the executable for a given tool. Every implementation is free to choose its own set of supported tools. The command line interface of the executable is specific to the tool. If a tool is not supported or not known, null is returned.
  • String getIdentifier() returns the identifier of the toolchain. It can be used to distinguish results produced by different toolchains. Since the identifier does not contain special characters such as slashes or spaces, it can be used as a path suffix to place results in distinct locations.

The Toolchain lives in the SULONG_API distribution. The LLVM runtime will always provide a toolchain that matches its current mode. The service can be looked up via the Env:

LanguageInfo llvmInfo = env.getInternalLanguages().get("llvm");
Toolchain toolchain = env.lookup(llvmInfo, Toolchain.class);
TruffleFile toolPath = toolchain.getToolPath("CC");
String toolchainId = toolchain.getIdentifier();
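Building on the lookup above, a language implementation could use the returned tool path to assemble a compiler invocation. Since the real service is only available inside a Truffle language, the following standalone sketch mirrors the two service methods with a stand-in interface (Path instead of TruffleFile); the interface, paths, and flags here are illustrative assumptions:

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

// Stand-in for the real Toolchain service described above
// (the real getToolPath returns a TruffleFile).
interface SimpleToolchain {
    Path getToolPath(String tool);
    String getIdentifier();
}

class NativeExtensionBuilder {
    // Assemble a C compiler command line for compiling a native extension.
    static List<String> compileCommand(SimpleToolchain tc, String source, String output) {
        Path cc = tc.getToolPath("CC");
        return List.of(cc.toString(), "-o", output, source);
    }

    public static void main(String[] args) {
        SimpleToolchain tc = new SimpleToolchain() {
            // Hypothetical launcher location; see the toolchain section above.
            public Path getToolPath(String tool) {
                return Paths.get("/graalvm/jre/languages/llvm/native/bin/clang");
            }
            public String getIdentifier() {
                return "native";
            }
        };
        System.out.println(compileCommand(tc, "extension.c", "extension.bc"));
    }
}
```

In a real implementation the command would then be run, for example with ProcessBuilder, to compile the extension on demand.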


GraalVM supports several other programming languages, including JavaScript, Python, Ruby, and R. While lli is designed to run LLVM bitcode, it also provides an API for programming language interoperability that lets you execute code from any other language that GraalVM supports.

Dynamic languages like JavaScript usually access object members by name. Since names are normally not preserved in LLVM bitcode, the bitcode must be compiled with debug information enabled (-g).

The following example demonstrates how you can use the API for interoperability with other programming languages.

Let us define a C struct for points and implement allocation functions:

// cpart.c
#include <polyglot.h>

#include <stdlib.h>
#include <stdio.h>

struct Point {
    double x;
    double y;
};

POLYGLOT_DECLARE_STRUCT(Point)

void *allocNativePoint() {
    struct Point *ret = malloc(sizeof(*ret));
    return polyglot_from_Point(ret);
}

void *allocNativePointArray(int length) {
    struct Point *ret = calloc(length, sizeof(*ret));
    return polyglot_from_Point_array(ret, length);
}

void freeNativePoint(struct Point *p) {
    free(p);
}

void printPoint(struct Point *p) {
    printf("Point<%f,%f>\n", p->x, p->y);
}

Make sure GRAALVM_HOME resolves to the GraalVM installation directory, then compile the cpart.c file with:

$ clang -g -O1 -c -emit-llvm -I$GRAALVM_HOME/jre/languages/llvm/include cpart.c

You can access your C/C++ code from other languages like JavaScript:

// jspart.js

// Load and parse the LLVM bitcode into GraalVM
var cpart = Polyglot.evalFile("llvm", "cpart.bc");

// Allocate a light-weight C struct
var point = cpart.allocNativePoint();

// Access it as if it were a JS object
point.x = 5;
point.y = 7;

// Pass it back to a native function
cpart.printPoint(point);

// We can also allocate an array of structs
var pointArray = cpart.allocNativePointArray(15);

// We can access this array like it was a JS array
for (var i = 0; i < pointArray.length; i++) {
    var p = pointArray[i];
    p.x = i;
    p.y = 2*i;
}

// We can also pass a JS object to a native function
cpart.printPoint({x: 17, y: 42});

// Don't forget to free the unmanaged data objects
cpart.freeNativePoint(point);
cpart.freeNativePoint(pointArray);

Run this JavaScript file with:

$ js --polyglot jspart.js

Polyglot C API

There are also lower level API functions for directly accessing polyglot values from C. See the Polyglot Reference and the documentation comments in polyglot.h for more details.

For example, this program allocates and accesses a Java array from C:

#include <stdio.h>
#include <polyglot.h>

int main() {
    void *arrayType = polyglot_java_type("int[]");
    void *array = polyglot_new_instance(arrayType, 4);
    polyglot_set_array_element(array, 2, 24);
    int element = polyglot_as_i32(polyglot_get_array_element(array, 2));
    printf("%d\n", element);
    return element;
}

Compile it to LLVM bitcode:

$ clang -g -O1 -c -emit-llvm -I$GRAALVM_HOME/jre/languages/llvm/include polyglot.c

And run it, using the --jvm argument to run GraalVM in the JVM mode, since we are using a Java type:

$ lli --jvm polyglot.bc

Embedding in Java

GraalVM can also be used to embed LLVM bitcode in Java host programs.

For example, let us write a Java class that embeds GraalVM to run the previous example:

import java.io.*;
import org.graalvm.polyglot.*;

class Polyglot {
    public static void main(String[] args) throws IOException {
        Context polyglot = Context.newBuilder().allowAllAccess(true).build();
        File file = new File("polyglot.bc");
        Source source = Source.newBuilder("llvm", file).build();
        Value cpart = polyglot.eval(source);
    }
}

Compiling and running it:

$ javac Polyglot.java
$ java Polyglot

See the Embedding documentation for more information.

Source-Level Debugging

You can use GraalVM’s Debugger to debug programs compiled to LLVM bitcode. To use this feature, make sure to compile your program with debug information by specifying the -g argument when compiling with clang. This gives you the ability to step through the program’s source code and set breakpoints in it. To also inspect the local and global variables of your program, pass --llvm.enableLVI=true as an argument to lli. This option is not enabled by default as it can significantly decrease your program’s run-time performance.

LLVM Compatibility

GraalVM works with LLVM bitcode versions 3.8 to 7.0.

Optimization Flags

In contrast to the static compilation model of LLVM languages, in GraalVM the machine code is not directly produced from the LLVM bitcode, but there is an additional dynamic compilation step by the GraalVM compiler.

In this scenario, first the LLVM frontend (e.g. clang) does optimizations on the bitcode level, and then the GraalVM compiler does its own optimizations on top of that during dynamic compilation. Some optimizations are better when done ahead-of-time on the bitcode, while other optimizations are better left for the dynamic compilation of the GraalVM compiler, when profiling information is available.

In principle, all optimization levels should work, but for best results we suggest compiling the bitcode with optimization level -O1.

Cross-language interoperability will only work when the bitcode is compiled with debug information enabled (-g), and the -mem2reg optimization is performed on the bitcode (compiled with at least -O1, or explicitly using the opt tool).

LLI Command Options

-L <path>/--llvm.libraryPath=<path>: a list of paths where GraalVM will search for library dependencies. Paths are delimited by :.

--lib <libs>/--llvm.libraries=<libs>: a list of libraries to load. The list can contain precompiled native libraries (*.so/*.dylib) and bitcode libraries (*.bc). Files with a relative path are looked up relative to llvm.libraryPath. Entries are delimited by :.

--llvm.enableLVI=<true/false>: enable source-level symbol inspection in the debugger. This defaults to false as it can decrease run-time performance.

--llvm.managed: enables managed execution mode for LLVM IR code, meaning memory allocations from LLVM bitcode are done on the managed heap. See the section on managed execution below for details.

--version: prints the version and exits.

--version:graalvm: prints GraalVM version information and exits.

Expert and Diagnostic Options

Use --help and --help:<topic> to get a full list of options.

Limitations and Differences to Native Execution

LLVM code interpreted or compiled with the default configuration of GraalVM Community or Enterprise Edition does not have the same characteristics as the same code interpreted or compiled in the managed environment, which is enabled with the --llvm.managed option on top of GraalVM Enterprise. The behavior of the lli tool used to directly execute programs in LLVM bitcode format differs between the native and managed modes. The difference lies in the safety guarantees and in cross-language interoperability.

In the default configuration, cross-language interoperability requires bitcode to be compiled with debug information enabled (-g) and with the -mem2reg optimization performed on the bitcode (compiled with at least -O1, or explicitly using the opt tool). These requirements do not apply in the managed environment of GraalVM EE, which allows native code to participate in polyglot programs, passing data to and receiving data from any other supported language. In terms of security, executing native code in the managed environment comes with additional safety features: catching illegal pointer accesses, out-of-bounds array accesses, etc.

There are certain limitations and differences to native execution depending on the GraalVM edition. They are described below.

Limitations and Differences to Native Execution on Top of GraalVM CE

The LLVM interpreter in the GraalVM Community Edition allows executing LLVM bitcode within a multilingual context. Even though it aspires to be a generic LLVM runtime, there are certain fundamental and implementation-specific limitations that users need to be aware of.

The following restrictions and differences to native execution (i.e., bitcode compiled down to native code) exist when LLVM bitcode is executed with the LLVM interpreter on top of GraalVM CE:

  • The GraalVM LLVM interpreter assumes that bitcode was generated to target the x86_64 architecture.
  • Bitcode should be the result of compiling C/C++ code using clang version 7; other compilers/languages, e.g., Rust, might have specific requirements that are not supported.
  • Unsupported functionality – it is not possible to call any of the following functions:
    • clone()
    • fork()
    • vfork()
    • setjmp(), sigsetjmp(), longjmp(), siglongjmp()
    • Functions of the exec() function family
    • Pthread functions
    • Code running in the LLVM interpreter needs to be aware that a JVM is running in the same process, so many syscalls such as fork, brk, sbrk, futex, mmap, rt_sigaction, rt_sigprocmask, etc. might not work as expected or cause the JVM to crash.
    • Calling unsupported syscalls or unsupported functionality (listed above) via native code libraries can cause unexpected side effects and crashes.
  • Thread local variables
    • Thread local variables from bitcode are not compatible with thread local variables from native code.
  • Cannot rely on memory layout
    • Pointers to thread local variables are not stored in specific locations, e.g., the FS segment.
    • The order of globals in memory might be different, consequently no assumptions about their relative locations can be made.
    • Stack frames cannot be inspected or modified using pointer arithmetic (overwrite return address, etc.).
    • Walking the stack is only possible using the Truffle APIs.
    • There is a strict separation between code and data, so that reads, writes and pointer arithmetic on function pointers or pointers to code will lead to undefined behavior.
  • Signal handlers
    • Installing signal handlers is not supported.
  • The stack
    • The default stack size is not set by the operating system but by the option --llvm.stackSize.
  • Dynamic linking
    • Interacting with the LLVM bitcode dynamic linker is not supported, e.g., dlsym/dlopen can only be used for native libraries.
    • The dynamic linking order is undefined if native libraries and LLVM bitcode libraries are mixed.
    • Native libraries cannot import symbols from bitcode libraries.
  • x86_64 inline assembly is not supported.
  • Undefined behavior according to C spec
    • While most C compilers map undefined behavior to CPU semantics, the GraalVM LLVM interpreter might map some of this undefined behavior to Java or other semantics. Examples include: signed integer overflow (mapped to the Java semantics of an arithmetic overflow), integer division by zero (will throw an ArithmeticException), oversized shift amounts (mapped to the Java behavior).
  • Floating-point arithmetic
    • Some floating point operations and math functions will use more precise operations and cast the result to a lower precision (instead of performing the operation at a lower precision).
    • Only the rounding mode FE_TONEAREST is supported.
    • Floating point exceptions are not supported.
  • NFI limitations (calling real native functions)
    • Structs, complex numbers, or fp80 values are not supported as by-value arguments or by-value return values.
    • The same limitation applies to calls back from native code into interpreted LLVM bitcode.
  • Limitations of polyglot interoperability (working with values from other GraalVM languages)
    • Foreign objects cannot be stored in native memory locations. Native memory locations include:
      • globals (except the specific case of a global holding exactly one pointer value);
      • malloc’ed memory (including c++ new, etc.);
      • stack (e.g. escaping automatic variables).
  • LLVM instruction set support (based on LLVM 7.0.1)
    • A set of rarely-used bitcode instructions are not available (va_arg, catchpad, cleanuppad, catchswitch, catchret, cleanupret, fneg, callbr).
    • The instructions with limited support:
      • atomicrmw (only supports sub, add, and, nand, or, xor, xchg);
      • extractvalue and insertvalue (only a single indexing operand is supported);
      • cast (missing support for certain rarely-used kinds);
      • atomic ordering and address space attributes of load and store instructions are ignored.
    • Values – assembly constants are not supported (module-level assembly and any assembly strings).
    • Types:
      • There is no support for 128-bit floating point types (fp128 and ppc_fp128), x86_mmx, half-precision floats (fp16) and any vectors of unsupported primitive types.
      • The support for fp80 is limited (not all intrinsics are supported for fp80, some intrinsics or instructions might silently fall back to fp64).
  • A number of rarely-used or experimental intrinsics based on LLVM 7.0.1 are not supported because of implementation limitations or because they are out of scope:
    • experimental intrinsics: llvm.experimental.*;
    • trampoline intrinsics: llvm.init.trampoline, llvm.adjust.trampoline;
    • general intrinsics: llvm.var.annotation, llvm.ptr.annotation, llvm.annotation, llvm.codeview.annotation, llvm.trap, llvm.debugtrap, llvm.stackprotector, llvm.stackguard, llvm.ssa_copy, llvm.type.test, llvm.type.checked.load, llvm.load.relative, llvm.sideeffect;
    • specialised arithmetic intrinsics: llvm.canonicalize, llvm.fmuladd;
    • standard C library intrinsics: llvm.fma, llvm.trunc, llvm.nearbyint, llvm.round;
    • code generator intrinsics: llvm.returnaddress, llvm.addressofreturnaddress, llvm.frameaddress, llvm.localescape, llvm.localrecover, llvm.read_register, llvm.write_register, llvm.stacksave, llvm.stackrestore, llvm.get.dynamic.area.offset, llvm.prefetch, llvm.pcmarker, llvm.readcyclecounter, llvm.clear_cache, llvm.instrprof*, llvm.thread.pointer;
    • exact gc intrinsics: llvm.gcroot, llvm.gcread, llvm.gcwrite;
    • element-wise atomic memory intrinsics: llvm.*.element.unordered.atomic;
    • masked vector intrinsics: llvm.masked.*;
    • bit manipulation intrinsics: llvm.bitreverse, llvm.fshl, llvm.fshr.
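Because the interpreter maps some C undefined behavior to Java semantics (as listed above), the resulting behavior can be illustrated with plain Java. The following standalone snippet demonstrates those Java semantics; it does not require GraalVM:

```java
class UndefinedBehaviorDemo {
    // Signed integer overflow: Java wraps around (two's-complement semantics).
    static int overflowingAdd(int a, int b) {
        return a + b;
    }

    // Integer division by zero: Java throws an ArithmeticException.
    static boolean divisionByZeroThrows() {
        try {
            int unused = 1 / 0;
            return false;
        } catch (ArithmeticException e) {
            return true;
        }
    }

    // Oversized shift amounts: Java masks the shift count (mod 32 for int).
    static int oversizedShift(int value, int amount) {
        return value << amount;
    }

    public static void main(String[] args) {
        System.out.println(overflowingAdd(Integer.MAX_VALUE, 1)); // -2147483648
        System.out.println(divisionByZeroThrows());               // true
        System.out.println(oversizedShift(1, 33));                // 2 (33 & 31 == 1)
    }
}
```

A C program relying on such undefined behavior may therefore behave differently under the GraalVM LLVM interpreter than when compiled natively.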

Limitations and Differences to Managed Execution on Top of GraalVM EE

Managed execution of LLVM intermediate representation code is a GraalVM Enterprise Edition feature and can be enabled with the --llvm.managed command-line option. In managed mode, the GraalVM LLVM runtime prevents access to unmanaged memory and uncontrolled calls to native code and operating system functionality. Allocations are performed on the managed Java heap, and accesses to the surrounding system are routed through the proper Truffle API and Java API calls.

All the restrictions from the default native LLVM execution on GraalVM apply to the managed execution, but with the following differences/changes:

  • Platform independent
    • Bitcode must be compiled for a generic linux_x86_64 target, using the provided musl libc library, on all platforms, regardless of the actual underlying operating system.
  • C++
    • C++ is currently not supported in managed mode.
  • Native memory and code
    • Calls to native functions are not possible, thus only the functionality provided in the supplied musl libc and by the GraalVM LLVM interface is available.
    • Loading native libraries is not possible.
    • Native memory access is not possible.
  • System calls
    • Only a limited set of system calls is supported: read, readv, write, writev, open, close, dup, dup2, lseek, stat, fstat, lstat, chmod, fchmod, ioctl, fcntl, unlink, rmdir, utimensat, uname, set_tid_address, gettid, getppid, getpid, getcwd, exit, exit_group, clock_gettime, and arch_prctl.
    • The functionality is limited to common terminal IO, process control and file system operations.
    • Some syscalls are implemented as no-ops and/or return errors indicating that they are not available, e.g., chown, lchown, fchown, brk, rt_sigaction, sigprocmask, futex.
  • Musl libc
    • The musl libc library behaves differently than the more common glibc in some cases.
  • The stack
    • Accessing the stack pointer directly is not possible.
    • The stack is not contiguous, and accessing memory that is out of the bounds of a stack allocation (e.g., accessing a neighboring stack value using pointer arithmetic) is not possible.
  • Pointers into the managed heap
    • Reading parts of a managed pointer is not possible.
    • Overwriting parts of a managed pointer (e.g., using bits for pointer tagging) and subsequently dereferencing the destroyed managed pointer is not possible.
    • Undefined behavior in C pointer arithmetic applies.
    • Complex pointer arithmetic (e.g., multiplying pointers) can convert a managed pointer to an i64 value; the i64 value can be used in pointer comparisons but cannot be dereferenced.
  • Floating-point arithmetic
    • 80-bit floating-point values only use 64-bit precision.
  • Dynamic linking
    • Interacting with the LLVM bitcode dynamic linker is not supported, e.g., dlsym/dlopen cannot be used. Consequently, native code cannot be loaded this way.