GraalVM Debugging and Monitoring Tools

GraalVM provides a set of tools for developers, integrators, and IT administrators to debug and monitor GraalVM and deployed applications.

Debugger

GraalVM supports debugging of guest language applications and provides a built-in implementation of the Chrome DevTools Protocol. This allows you to attach compatible debuggers such as Chrome Developer Tools to GraalVM.

To debug guest language applications, pass the --inspect option to the command-line launcher, as in the following example with a node.js hello world program:

var http = require('http');

var server = http.createServer(function (request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.end("Hello World!\n");
});

server.listen(8000);

console.log("Server running at http://localhost:8000/");
  1. Save this program as HelloWorld.js and then run:
$ node --inspect --jvm HelloWorld.js
Debugger listening on port 9229.
To start debugging, open the following URL in Chrome:
    chrome-devtools://devtools/bundled/js_app.html?ws=127.0.1.1:9229/76fcb6dd-35267eb09c3
Server running at http://localhost:8000/
  2. Navigate to http://localhost:8000/ in your browser to launch the node application.

  3. Open the chrome-devtools:... link in a separate Chrome browser tab.

  4. Navigate to the HelloWorld.js file and set a breakpoint at line 4.

  5. Refresh the node.js app and the breakpoint is hit.

You can inspect the stack and variables, evaluate variables and selected expressions in a tooltip, and so on. By hovering the mouse over the response variable, for instance, you can inspect its properties.

Consult the JavaScript Debugging Reference for details on Chrome DevTools debugging features.

This debugging process applies to all guest languages that GraalVM supports. Other languages such as R and Ruby can be debugged as easily as JavaScript, including stepping through language boundaries during guest language interoperability.

Inspect Options

Node Launcher

The node.js implementation that GraalVM provides accepts the same options as node.js built on the V8 JavaScript engine, such as:

--inspect[=[host:]<port number>]

Enables the inspector agent and listens on port 9229 by default. To listen on a different port, specify the optional port number.

--inspect-brk[=[host:]<port number>]

Enables the inspector agent and suspends on the first line of the application code. Listens on port 9229 by default; to listen on a different port, specify the optional port number. This option applies to the node launcher only.
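
For example, a node launch that suspends on the first line and listens on an alternative port might look like this (the port number is arbitrary; HelloWorld.js is the program from above):

$ node --inspect-brk=9230 --jvm HelloWorld.js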

Other Language Launchers

Other guest language launchers such as js, python, Rscript, ruby, lli and polyglot accept the --inspect[=[host:]<port number>] option, but suspend on the first line of the application code by default.

--inspect.Suspend=(true|false)

Disables the initial suspension if you specify --inspect.Suspend=false.
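
For instance, to attach the inspector to a script without suspending on the first line (app.js is a placeholder script name):

$ js --inspect --inspect.Suspend=false app.js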

Additional Common Inspect Options

All launchers also accept the following additional options:

--inspect.Path=<path>

Allows you to specify a fixed path that generates a predictable connection URL. By default, the path is randomly generated.

--inspect.SourcePath=<source path>

This option specifies a list of directories or ZIP/JAR files representing the source path. When the inspected application contains relative references to source files, their content is loaded from locations resolved with respect to this source path. It is useful during LLVM debugging, for instance. The paths are delimited by : on UNIX systems and by ; on MS Windows.
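
For example, when debugging LLVM bitcode that contains relative source references, the source path might be supplied as follows (the directory, archive, and program.bc names are placeholders):

$ lli --inspect --inspect.SourcePath=/home/user/project/src:/home/user/libs/sources.zip program.bc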

--inspect.Secure=(true|false)

When true, TLS/SSL is used to secure the debugging protocol. Besides changing the WS (web socket) protocol to WSS, the HTTP endpoint that serves metadata about the debuggee is also changed to HTTPS. This is not compatible with the chrome://inspect page, for example, which then cannot retrieve the debuggee information and launch the debugger. In that case, launch debugging via the printed WSS URL directly.

Use the standard javax.net.ssl.* system properties to provide information about the keystore with the TLS/SSL encryption keys, or the following options (see the example after this list):

  • --inspect.KeyStore keystore file path,
  • --inspect.KeyStoreType keystore file type (defaults to JKS),
  • --inspect.KeyStorePassword keystore password,
  • --inspect.KeyPassword password for recovering keys, if it’s different from the keystore password.
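
A minimal sketch of a secure launch, assuming these keystore options accept the same =<value> syntax as the other inspect options; the keystore path, password, and app.js script are placeholders:

$ js --inspect --inspect.Secure=true --inspect.KeyStore=/path/to/keystore.jks --inspect.KeyStorePassword=changeit app.js
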
--inspect.WaitAttached=(true|false)

When true, no guest language source code is executed until the inspector client is attached. Unlike --inspect.Suspend=true, execution resumes right after the client attaches. This ensures that the inspector client misses no execution. It is false by default.

Advanced Debug Options

The following options are for language experts and language developers:

--inspect.Initialization=(true|false)

When true, the language initialization phase is inspected as well. When initial suspension is active, execution suspends at the beginning of language initialization, not necessarily at the beginning of the application code. It is false by default.

--inspect.Internal=(true|false)

When true, internal sources are inspected as well. Internal sources may provide language implementation details. It’s false by default.

Programmatic Launch of Inspector Backend

Embedders can provide the appropriate inspector options to the Engine/Context to launch the inspector backend. The following code snippet provides an example of a possible launch:

import org.graalvm.polyglot.Context;

String port = "4242";
String path = "session-identifier";
String remoteConnect = "true";
Context context = Context.newBuilder("js")
            .option("inspect", port)
            .option("inspect.Path", path)
            .option("inspect.Remote", remoteConnect)
            .build();
String hostAddress = "localhost";
String url = String.format(
            "chrome-devtools://devtools/bundled/js_app.html?ws=%s:%s/%s",
            hostAddress, port, path);
// A Chrome Inspector client can be attached by opening the above URL in Chrome.

Profiler

GraalVM provides Profiling command line tools that let you optimize your code through analysis of CPU and memory usage.

Most applications spend 80 percent of their runtime in 20 percent of the code. For this reason, to optimize the code, it is essential to know where the application spends its time. GraalVM provides simple command line tools for runtime and memory profiling to help you analyze and optimize your code.

In this section, we use an example application to demonstrate the profiling capabilities that GraalVM offers. This example application uses a basic prime number calculator based on the ancient Sieve of Eratosthenes algorithm.

  1. Copy the following code into a new file named primes.js:

     class AcceptFilter {
         accept(n) {
             return true
         }
     }
    
     class DivisibleByFilter {
         constructor(number, next) {
             this.number = number;
             this.next = next;
         }
    
         accept(n) {
             var filter = this;
             while (filter != null) {
                 if (n % filter.number === 0) {
                     return false;
                 }
                 filter = filter.next;
             }
             return true;
         }
     }
    
     class Primes {
         constructor() {
             this.number = 2;
             this.filter = new AcceptFilter();
         }
    
         next() {
             while (!this.filter.accept(this.number)) {
                 this.number++;
             }
             this.filter = new DivisibleByFilter(this.number, this.filter);
             return this.number;
         }
     }
    
     var primes = new Primes();
     var primesArray = [];
     for (let i = 0; i < 5000; i++) {
         primesArray.push(primes.next());
     }
     console.log(`Computed ${primesArray.length} prime numbers. ` +
                 `The last 5 are ${primesArray.slice(-5)}.`);
    
  2. Run js primes.js.

    The example application should print output as follows:

     $> js primes.js
     Computed 5000 prime numbers. The last 5 are 48563,48571,48589,48593,48611.
    
    

    This code takes a moment to compute so let’s see where all the time is spent.

  3. Run js primes.js --cpusampler to enable CPU sampling.

    The CPU sampler tool should print output for the example application as follows:

     $ ./js primes.js --cpusampler
     Computed 5000 prime numbers. The last 5 are 48563,48571,48589,48593,48611.
     ---------------------------------------------------------------------------------------------------
     Sampling Histogram. Recorded 1184 samples with period 1ms
       Self Time: Time spent on the top of the stack.
       Total Time: Time the location spent on the stack.
       Opt %: Percent of time spent in compiled and therefore non-interpreted code.
     ---------------------------------------------------------------------------------------------------
      Name        |      Total Time     |  Opt % ||       Self Time     |  Opt % | Location
     ---------------------------------------------------------------------------------------------------
      next        |       1216ms  98.5% |  87.9% ||       1063ms  85.9% |  99.0% | primes.js~31-37:564-770
      accept      |        159ms  11.2% |  22.7% ||        155ms  12.5% |  14.8% | primes.js~13-22:202-439
      :program    |       1233ms 100.0% |   0.0% ||         18ms   1.5% |   0.0% | primes.js~1-47:0-1024
      constructor |          1ms   0.1% |   0.0% ||          1ms   0.1% |   0.0% | primes.js~7-23:72-442
     ---------------------------------------------------------------------------------------------------
    
    

    The sampler prints an execution time histogram for each JavaScript function. By default, CPU sampling takes a sample every millisecond. From the result we can see that nearly all of the time is spent in the next function and the DivisibleByFilter.accept function it calls in a loop:

     accept(n) {
         var filter = this;
         while (filter != null) {
             if (n % filter.number === 0) {
                 return false;
             }
             filter = filter.next;
         }
         return true;
     }
    
    

    Now find out more about this function by filtering the samples and including statements in the profile in addition to methods.

  4. Run js primes.js --cpusampler --cpusampler.Mode=statements --cpusampler.FilterRootName=*accept to collect statement samples for all functions that end with accept.

     $ js primes.js --cpusampler --cpusampler.Mode=statements --cpusampler.FilterRootName=*accept
     Computed 5000 prime numbers. The last 5 are 48563,48571,48589,48593,48611.
     ----------------------------------------------------------------------------------------------------
     Sampling Histogram. Recorded 1567 samples with period 1ms
       Self Time: Time spent on the top of the stack.
       Total Time: Time the location spent on the stack.
       Opt %: Percent of time spent in compiled and therefore non-interpreted code.
     ----------------------------------------------------------------------------------------------------
      Name         |      Total Time     |  Opt % ||       Self Time     |  Opt % | Location
     ----------------------------------------------------------------------------------------------------
      accept~16-18 |        436ms  27.8% |  94.3% ||        435ms  27.8% |  94.5% | primes.js~16-18:275-348
      accept~15    |        432ms  27.6% |  97.0% ||        432ms  27.6% |  97.0% | primes.js~15:245-258
      accept~19    |        355ms  22.7% |  95.5% ||        355ms  22.7% |  95.5% | primes.js~19:362-381
      accept~17    |          1ms   0.1% |   0.0% ||          1ms   0.1% |   0.0% | primes.js~17:322-334
     ----------------------------------------------------------------------------------------------------
    
    

    Roughly 30 percent of the time is spent in this if condition:

     if (n % filter.number === 0) {
         return false;
     }
    
    

    The if condition contains an expensive modulo operation, which might explain the runtime of the statement.

    Now use the CPU tracer tool to collect execution counts of each statement.

  5. Run js primes.js --cputracer --cputracer.TraceStatements --cputracer.FilterRootName=*accept to collect execution counts for all statements in methods ending with accept.

     $ js primes.js --cputracer --cputracer.TraceStatements --cputracer.FilterRootName=*accept
     Computed 5000 prime numbers. The last 5 are 48563,48571,48589,48593,48611.
     -----------------------------------------------------------------------------------------
     Tracing Histogram. Counted a total of 351278226 element executions.
       Total Count: Number of times the element was executed and percentage of total executions.
       Interpreted Count: Number of times the element was interpreted and percentage of total executions of this element.
       Compiled Count: Number of times the compiled element was executed and percentage of total executions of this element.
     -----------------------------------------------------------------------------------------
      Name     |          Total Count |    Interpreted Count |       Compiled Count | Location
     -----------------------------------------------------------------------------------------
      accept   |     117058669  33.3% |         63575   0.1% |     116995094  99.9% | primes.js~15:245-258
      accept   |     117053670  33.3% |         63422   0.1% |     116990248  99.9% | primes.js~16-18:275-348
      accept   |     117005061  33.3% |         61718   0.1% |     116943343  99.9% | primes.js~19:362-381
      accept   |         53608   0.0% |          1857   3.5% |         51751  96.5% | primes.js~14:215-227
      accept   |         53608   0.0% |          1857   3.5% |         51751  96.5% | primes.js~13-22:191-419
      accept   |         48609   0.0% |          1704   3.5% |         46905  96.5% | primes.js~17:322-334
      accept   |          4999   0.0% |           153   3.1% |          4846  96.9% | primes.js~21:409-412
      accept   |             1   0.0% |             1 100.0% |             0   0.0% | primes.js~2-4:25-61
      accept   |             1   0.0% |             1 100.0% |             0   0.0% | primes.js~3:52-55
     -----------------------------------------------------------------------------------------
    
    

    Now the output shows execution counters for each statement, instead of timing information. Tracing histograms often provide insights into the behavior of the algorithm that needs to be optimized.

    Lastly, use the memory tracer tool to capture allocations, for which GraalVM currently provides experimental support. Note: as an experimental tool, --memtracer must be preceded by the --experimental-options command-line option.

  6. Run js primes.js --experimental-options --memtracer to display source code locations and counts of reported allocations.

     $ js primes.js --experimental-options --memtracer
     Computed 5000 prime numbers. The last 5 are 48563,48571,48589,48593,48611.
     ------------------------------------------------------------
      Location Histogram with Allocation Counts. Recorded a total of 5013 allocations.
        Total Count: Number of allocations during the execution of this element.
        Self Count: Number of allocations in this element alone (excluding sub calls).
     ------------------------------------------------------------
      Name        |      Self Count |     Total Count |  Location
     ------------------------------------------------------------
      next        |     5000  99.7% |     5000  99.7% | primes.js~31-37:537-737
      :program    |       11   0.2% |     5013 100.0% | primes.js~1-46:0-966
      Primes      |        1   0.0% |        1   0.0% | primes.js~25-38:454-739
     ------------------------------------------------------------
    
    

    This output shows the number of allocations recorded per function. For each prime number that was computed, the program allocates one object in next and one in the constructor of DivisibleByFilter. Allocations are recorded independently of whether the compiler could eliminate them. The Graal compiler is particularly effective at optimizing allocations and can push allocations into infrequent branches to increase execution performance. The GraalVM team plans to add information about memory optimizations to the memory tracer in the future.

Tool Reference

Use the --help:tools option in all guest language launchers to display reference information for the CPU sampler, the CPU tracer, and the memory tracer.
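
For example:

$ js --help:tools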

The current set of available options is as follows:

CPU Sampler Command Options

  • --cpusampler: enables the CPU sampler. Disabled by default.
  • --cpusampler.Delay=<Long>: delays the sampling for the given number of milliseconds (default: 0).
  • --cpusampler.FilterFile=<Expression>: applies a wildcard filter for source file paths. For example, *program*.sl. The default is *.
  • --cpusampler.FilterLanguage=<String>: profiles languages only with the matching mime-type. For example, +. The default is no filter.
  • --cpusampler.FilterRootName=<Expression>: applies a wildcard filter for program roots. For example, Math.*. The default is *.
  • --cpusampler.GatherHitTimes: saves a timestamp for each taken sample. The default is false.
  • --cpusampler.Mode=<Mode>: describes the level of sampling detail. Note that increased detail can lead to reduced accuracy.
    • exclude_inlined_roots samples roots, excluding inlined functions (enabled by default);
    • roots samples roots, including inlined functions;
    • statements samples all statements.
  • --cpusampler.Output=<Output>: prints a ‘histogram’ or ‘calltree’ as output. The default is ‘histogram’.
  • --cpusampler.Period=<Long>: specifies the period, in milliseconds, to sample the stack.
  • --cpusampler.SampleInternal: captures internal elements. The default is false.
  • --cpusampler.StackLimit=<Integer>: specifies the maximum number of stack elements.
  • --cpusampler.SummariseThreads: prints sampling output as a summary of all ‘per thread’ profiles. The default is false.

CPU Tracer Command Options

  • --cputracer: enables the CPU tracer. Disabled by default.
  • --cputracer.FilterFile=<Expression>: applies a wildcard filter for source file paths. For example, *program*.sl. The default is *.
  • --cputracer.FilterLanguage=<String>: profiles languages only with the matching mime-type. For example, +. The default is no filter.
  • --cputracer.FilterRootName=<Expression>: applies a wildcard filter for program roots. For example, Math.*. The default is *.
  • --cputracer.Output=<Output>: prints a histogram or json as output. The default is histogram.
  • --cputracer.TraceCalls: captures calls when tracing. The default is false.
  • --cputracer.TraceInternal: traces internal elements. The default is false.
  • --cputracer.TraceRoots=<Boolean>: captures roots when tracing. The default is true.
  • --cputracer.TraceStatements: captures statements when tracing. The default is false.

Memory Tracer Command Options

Warning: The memory tracer tool is experimental. Experimental features might never be included in a production version, or might change significantly before being considered production-ready. Make sure to prepend the --experimental-options flag to enable --memtracer.

  • --experimental-options --memtracer: enables the memory tracer. Disabled by default.
  • --memtracer.FilterFile=<Expression>: applies a wildcard filter for source file paths. For example, *program*.sl. The default is *.
  • --memtracer.FilterLanguage=<String>: profiles languages only with the matching mime-type. For example, +. The default is no filter.
  • --memtracer.FilterRootName=<Expression>: applies a wildcard filter for program roots. For example, Math.*. The default is *.
  • --memtracer.Output=<Format>: prints a ‘typehistogram’, ‘histogram’, or ‘calltree’ as output. The default is ‘histogram’.
  • --memtracer.StackLimit=<Integer>: sets the maximum number of stack elements.
  • --memtracer.TraceCalls: captures calls when tracing. The default is false.
  • --memtracer.TraceInternal: captures internal elements. The default is false.
  • --memtracer.TraceRoots=<Boolean>: captures roots when tracing. The default is true.
  • --memtracer.TraceStatements: captures statements when tracing. The default is false.

Ideal Graph Visualizer

Ideal Graph Visualizer (IGV) is a developer tool, currently maintained as part of the GraalVM compiler, recommended for investigating performance issues.

The tool is essential for any language implementers building on top of GraalVM Enterprise Edition. It is available as a separate download on Oracle Technology Network and requires accepting the Oracle Technology Network Developer License.

Ideal Graph Visualizer is developed to view and inspect intermediate graph representations from GraalVM and Truffle compilations.

1. Unzip the downloaded package and enter the bin directory:

$ cd idealgraphvisualizer/bin

2. Launch the tool:

$ idealgraphvisualizer

3. Save the following code snippet as Test.rb:

require 'json'

obj = {
  time: Time.now,
  msg: 'Hello World',
  payload: (1..10).to_a
}

encoded = JSON.dump(obj)

js_obj = Polyglot.eval('js', 'JSON.parse').call(encoded)

puts js_obj[:time]
puts js_obj[:msg]
puts js_obj[:payload].join(' ')

4. From another console window, make sure the ruby component is installed in GraalVM, and connect the Test.rb script to the running IGV:

$ gu list
$ ruby --jvm --vm.Dgraal.Dump=:1 --vm.Dgraal.PrintGraph=Network Test.rb

This causes GraalVM to dump compiler graphs in IGV format over the network to an IGV process listening on 127.0.0.1:4445. Once the connection is made, you can see the graphs in the Outline window. Find, for example, the java.lang.String.charAt(int) folder and open its After parsing graph by double-clicking. If a node has the sourceNodePosition property, the Processing window will attempt to display its location and the entire stack trace.

Browsing Graphs

Once a specific graph is opened, you can search for nodes by name, ID, or by property=value data, and all matching results are shown. The tool can also navigate to the original guest language source code: select a node in the graph and press the ‘go to source’ button in the Stack View window.

Graph navigation is also available from the context menu, opened by right-clicking a specific graph node. The Extract nodes option re-renders the graph and displays just the selected nodes and their neighbors.

If the graph is larger than the screen, use the ‘satellite’ view button in the main toolbar to move the viewport rectangle.

The graph color scheme can be adjusted by editing the Coloring filter, which is enabled by default in the left sidebar.

Viewing Source Code

Source code views can be opened in manual and assisted modes. Once you select a node in the graph view, the Processing View will open. If IGV knows where the source code for the current frame is, the green ‘go to source’ arrow is enabled. If IGV does not know where the source is, the line is grayed out and a ‘looking glass’ button appears.

Press it and select "Locate in Java project" to locate the correct project in the dialog. IGV hides projects that do not contain the required source file. The "Source Collections" node displays the standalone source roots added by the general "Add root of sources" action. If the source is located using the preferred method (i.e., from a Java project), its project can later be managed on the Projects tab. That tab is initially hidden, but you can display the list of opened projects using Window > Projects.

Dumping Graphs

The IGV tool is developed to allow GraalVM language implementers to optimize their languages built with the Truffle framework. As a development tool, it should not be installed in production environments.

To dump the GraalVM compiler graphs from an embedded Java application to IGV, you need to add options to the GraalVM-based processes. Depending on the language/VM used, you may need to prefix the options with --vm; see the particular language's documentation for details. The main option to add is -Dgraal.Dump=:1, which dumps graphs in an IGV-readable format to the local file system. To send the dumps directly to IGV over the network instead, add -Dgraal.PrintGraph=Network when starting a GraalVM instance; optionally, a port can be specified. Dumps are then sent to IGV from the running GraalVM on localhost. If IGV does not listen on localhost, the option "Ideal Graph Settings > Accept Data from network" can be checked. If no IGV instance is listening on 127.0.0.1, or it cannot be connected to, the dumps are redirected to the local file system. The file system location is graal_dumps/ under the current working directory of the process and can be changed with the -Dgraal.DumpPath option.
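
For instance, a JVM-mode launch of an embedded Java application that sends graphs to a local IGV instance might look like this (MyApp is a placeholder main class):

$ java -Dgraal.Dump=:1 -Dgraal.PrintGraph=Network MyApp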

In case an older GraalVM is used, you may need to explicitly request that dumps include the nodeSourcePosition property. This is done by adding the -XX:+UnlockDiagnosticVMOptions -XX:+DebugNonSafepoints options.

GraalVM VisualVM

GraalVM comes with GraalVM VisualVM, an enhanced version of the popular VisualVM tool which includes special heap analysis features for the supported guest languages. These languages and features are currently available:

  • Java: Heap Summary, Objects View, Threads View, OQL Console
  • JavaScript: Heap Summary, Objects View, Threads View
  • Python: Heap Summary, Objects View
  • Ruby: Heap Summary, Objects View, Threads View
  • R: Heap Summary, Objects View

Starting GraalVM VisualVM

To start GraalVM VisualVM, execute jvisualvm. Immediately after startup, the tool shows all locally running Java processes in the Applications area, including the VisualVM process itself.
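
For example, assuming <GRAALVM_HOME> points to your GraalVM installation directory:

$ <GRAALVM_HOME>/bin/jvisualvm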

Important: GraalVM Native Image does not implement the JVMTI agent, so triggering heap dump creation from the Applications area is not possible. Instead, apply the -H:+AllowVMInspection flag with the native-image tool for Native Image processes. This way your application handles signals and produces a heap dump when it receives the SIGUSR1 signal. A guest language REPL process must also be started with the --jvm flag to monitor it using GraalVM VisualVM. This functionality is available in GraalVM Enterprise Edition; it is not available in the GraalVM open source version on GitHub. See the Generating Native Heap Dumps page for details on creating heap dumps from a native image process.
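
A sketch of this workflow, assuming a placeholder HelloWorld class; the resulting binary name and process ID will differ in practice:

$ native-image -H:+AllowVMInspection HelloWorld
$ ./helloworld &
$ kill -SIGUSR1 <pid>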

Getting Heap Dump

To get a heap dump of, for example, a Ruby application for later analysis, first start your application, and let it run for a few seconds to warm up. Then right-click its process in GraalVM VisualVM and invoke the Heap Dump action. A new heap viewer for the Ruby process opens.

Analyzing Objects

Initially the Summary view for the Java heap is displayed. To analyze the Ruby heap, click the leftmost (Summary) dropdown in the heap viewer toolbar, choose the Ruby Heap scope and select the Objects view. Now the heap viewer displays all Ruby heap objects, aggregated by their type.

Expand the Proc node in the results view to see a list of objects of this type. Each object displays its logical value as provided by the underlying implementation. Expand the objects to access their variables and references, where available.

Now enable the Preview, Variables and References details by clicking the buttons in the toolbar and select the individual ProcType objects. Where available, the Preview view shows the corresponding source fragment, the Variables view shows variables of the object and References view shows objects referring to the selected object.

Last, use the Presets dropdown in the heap viewer toolbar to switch the view from All Objects to Dominators or GC Roots. To display the heap dominators, retained sizes must be computed first, which can take a few minutes for this example. Select the Objects aggregation in the toolbar to view the individual dominators or GC roots.

Analyzing Threads

Click the leftmost dropdown in the heap viewer toolbar and select the Threads view for the Ruby heap. The heap viewer now displays the Ruby thread stack trace, including local objects. The stack trace can alternatively be displayed textually by clicking the HTML toolbar button.

Reading JFR Snapshots

The VisualVM tool bundled with GraalVM 19.2.0 in both Community and Enterprise editions can read JFR snapshots, that is, snapshots taken with JDK Flight Recorder (previously Java Flight Recorder). JFR is a tool for collecting diagnostic and profiling data about a running Java application. It is integrated into the Java Virtual Machine (JVM) and causes almost no performance overhead, so it can be used even in heavily loaded production environments.

To install the JFR support, released as a plugin:

  1. run <GRAALVM_HOME>/bin/jvisualvm to start VisualVM;
  2. navigate to Tools > Plugins > Available Plugins to list all available plugins and install the VisualVM-JFR and VisualVM-JFR-Generic modules.

The JFR snapshots can be opened using either the File > Load action or by double-clicking the JFR Snapshots node and adding the snapshot into the JFR repository permanently. Please follow the documentation for your Java version to create JFR snapshots.

The JFR viewer reads all JFR snapshots created from Java 7 and newer and presents the data in typical VisualVM views familiar to the tool users.

These views and functionality are currently available:

  • Overview tab displays the basic information about the recorded process like its main class, arguments, JVM version and configuration, and system properties. This tab also provides access to the recorded thread dumps.
  • Monitor tab shows the process uptime and basic telemetry – CPU usage, Heap and Metaspace utilization, number of loaded classes and number of live & started threads.
  • Threads tab reconstructs the threads timeline based on all events recorded in the snapshot as precisely as possible, based on the recording configuration.
  • Locks tab allows you to analyze thread synchronization.
  • File IO tab presents information on read and write events to the filesystem.
  • Socket IO tab presents information on read and write events to the network.
  • Sampler tab shows per-thread CPU utilization and memory allocations, and a heap histogram. There is also an experimental “CPU sampler” feature that builds a CPU snapshot from the recorded events. It does not provide an exact performance analysis, but it still helps to understand what was going on in the recorded application and where the CPU bottleneck might be.
  • Browser tab provides a generic browser of all events recorded in the snapshot.
  • Environment tab gives an overview of the recording machine setup and condition like CPU model, memory size, operating system version, CPU utilization, memory usage, etc.
  • Recording tab lists the recording settings and basic snapshot telemetry like number of events, total recording time, etc.

Warning: The support of JDK Flight Recorder is experimental. Experimental features might never be included in a production version, or might change significantly before being considered production-ready. Some advanced features like analyzing JVM internals, showing event stack traces or support for creating JFR snapshots from live processes are not available in this preview version and will be addressed incrementally in the following releases.