Discover the Benefits of Virtual Threads in Java
Discover how Virtual Threads in Java revolutionize concurrency and parallelism, optimizing the performance of modern applications.
I believe many developers, like myself, find themselves revisiting fundamental software development concepts long after finishing university. Whether it’s for job interviews, to improve performance at work, or simply to refresh important topics, the daily routine makes it easy to forget these fundamentals.
Often, a new feature in a programming language, like Virtual Threads in Java, pushes us to revisit these topics. That’s why I decided to write this article, based on my own experience revisiting fundamental topics such as:
- Concurrency
- Parallelism
- Processes and Threads
- And finally, diving into the world of Virtual Threads, introduced in Java starting from version 21.
These subjects are vast and dense, so this article serves as a brief summary. If you, the reader, want to dive deeper, I will provide sources and suggestions at the end for further learning on the topics mentioned.
Most of this article is based on the lecture by the great Java Champion Elder Moraes, and a good portion is also based on an excellent article by Hugo Marques from Netflix.
Concurrency and Parallelism
To simplify definitions, Concurrency refers to the ability of an operating system to handle multiple tasks at the same time, competing for the machine’s resources. This doesn’t necessarily mean that these tasks are running simultaneously, but rather that the operating system can quickly switch between tasks, making it seem like they’re running simultaneously.
Example of Concurrency:
Imagine an operating system running a web browser, a text editor, and a music player simultaneously. The system switches rapidly between these applications, allowing you to listen to music while typing a document and browsing the web, without these tasks actually running simultaneously.
On the other hand, Parallelism involves executing multiple tasks at the same time. This requires multiple CPU cores, where each core can execute a different task simultaneously, or split a task into smaller ones that use multiple resources.
Example of Parallelism:
Suppose you’re processing images where each image is processed by a different CPU core. If you have four cores, four images can be processed at the same time, with each core handling its own image, increasing efficiency.
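The image-processing idea above can be sketched with a Java parallel stream, which splits the work across the available CPU cores (the process step here is a made-up placeholder that just squares a number instead of processing an image):

```java
import java.util.List;
import java.util.stream.Collectors;

public class ParallelExample {
    // Hypothetical placeholder for per-image work: here we just square a number.
    static int process(int n) {
        return n * n;
    }

    public static void main(String[] args) {
        // parallelStream() distributes the elements across the cores of the
        // common ForkJoinPool, so several items are processed at the same time.
        List<Integer> results = List.of(1, 2, 3, 4).parallelStream()
                .map(ParallelExample::process)
                .collect(Collectors.toList());
        System.out.println(results); // [1, 4, 9, 16] (encounter order is preserved)
    }
}
```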
- Performance Measure: Some literature measures concurrency performance by throughput (tasks completed per unit of time), while parallelism performance is measured by latency (the time it takes to complete a single task).
Processes and Threads
Processes are instances of running programs. Each process operates independently, with its own memory space and resources allocated by the operating system. Since processes are isolated, they cannot access each other’s memory directly.
Threads, on the other hand, are the smallest unit of execution within a process. Multiple threads can exist within a single process, sharing the same memory and resources. This shared environment allows threads to communicate more easily compared to processes but also requires careful management to avoid issues like race conditions and deadlocks.
Differences Between Processes and Threads:
- Isolation: Processes are isolated from each other, while threads within the same process share memory and resources.
- Resource Management: Creating and managing processes is more costly in terms of resources than managing threads, due to the need for separate memory spaces and resources.
- Communication: Threads can communicate more easily within the same process, while processes require inter-process communication (IPC) mechanisms.
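The shared-memory point is easy to see in code. A minimal sketch (the counter and the loop bound are arbitrary choices) of two threads in the same process updating one shared counter, using AtomicInteger to avoid the race conditions mentioned above:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedMemoryExample {
    // Shared by every thread in this process; a plain int with ++ would race.
    static final AtomicInteger counter = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 1_000; i++) {
                counter.incrementAndGet(); // atomic read-modify-write
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join(); // wait for both threads before reading the result
        t2.join();
        System.out.println(counter.get()); // 2000
    }
}
```

Two separate processes could not share the counter this way; they would need an IPC mechanism such as a pipe or a socket.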
If threads share memory and resources, they compete for these resources. In the traditional thread model (Platform Threads), also known as thread-per-request, we can use Little’s Law to reason about concurrency performance:
- Concurrency = Throughput x Latency
Rearranged, Throughput = Concurrency / Latency: for a fixed latency per request, throughput grows with the number of requests being handled concurrently, i.e., with the number of threads.
Therefore, we can conclude two things:
- The more threads a system can execute, the higher the throughput.
- The more powerful the hardware or environment, the more threads can be executed.
Thus, under this model, throughput is ultimately capped by the maximum number of threads the system can sustain.
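To make the formula concrete, a quick arithmetic sketch (the 50 ms latency and the 200-thread figure are hypothetical numbers):

```java
public class LittlesLawExample {
    public static void main(String[] args) {
        double latencySeconds = 0.05; // hypothetical: each request takes 50 ms
        int concurrentThreads = 200;  // hypothetical: one thread per in-flight request
        // Little's Law rearranged: throughput = concurrency / latency
        double requestsPerSecond = concurrentThreads / latencySeconds;
        System.out.println(requestsPerSecond); // 4000.0
    }
}
```

Doubling the threads (or halving the latency) doubles the throughput, which is exactly why the thread limit becomes the throughput ceiling.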
Threads in Java
When we talk about threads in Java, we’re referring to the traditional model known as Platform Threads or OS Threads, which has been used in Java for over 20 years. We use the java.lang.Thread class, which is essentially a wrapper around an operating system thread.
While creating and managing threads in Java is relatively simple, handling a large number of threads can become cumbersome and resource-intensive.
Example of Traditional Thread in Java:
public class TraditionalThreadExample {
    public static void main(String[] args) {
        Thread thread = new Thread(() ->
            System.out.println("Running on a traditional thread!")
        );
        thread.start();
        try {
            thread.join(); // Wait for the thread to finish
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
Operating system threads are costly and require significant memory. This often leads to underutilized resources since, in modern operating systems, not all resources are allocated solely to Java.
In conclusion, Platform Threads are expensive, resource-intensive, and limited. To solve some of these problems, many developers turned to asynchronous programming, which indeed solves some issues, including resource underutilization. With asynchronous programming, we can send more requests than the available number of threads (when there’s an I/O wait, the thread returns to the pool).
However, this also introduces new problems:
- Non-traditional programming (new learning curve)
- Specific sets of APIs (another learning curve)
- And a much greater impact on system observability.
Virtual Threads in Java
Based on the previous paragraph, we can conclude that the best solution for improved throughput isn’t adopting a new programming style but making threads a cheaper resource. Or better yet, making threads no longer a resource at all!
This is where Virtual Threads come in.
We use the same class: java.lang.Thread.
Even though we use the same API, we no longer use operating system platform threads. The Java runtime itself handles the allocation and management of Virtual Threads.
And it may seem obvious, but there’s no operating system that manages a thread in Java better than the JVM itself. The JVM is the environment that best knows how to execute Java code! This is why Virtual Threads perform so well compared to traditional threads.
A common example is that virtual threads work with “resizable stacks”: stacks whose size can grow and shrink as needed. Since they live inside the JVM, their stacks can be resized on demand. Platform Threads, by contrast, need a stack size reserved up front by the operating system, which wastes memory or, when sized too small, results in stack overflow errors.
Example of the same code using Virtual Threads:
public class VirtualThreadExample {
    public static void main(String[] args) {
        Thread virtualThread = Thread.ofVirtual().start(
            () -> System.out.println("Running on a virtual thread!")
        );
        try {
            virtualThread.join(); // Wait for the thread to finish
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
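As a side note, the same API also offers the shorthand Thread.startVirtualThread(Runnable), which creates and starts the virtual thread in one call (a small sketch):

```java
public class VirtualThreadShorthand {
    public static void main(String[] args) throws InterruptedException {
        // Creates and immediately starts a virtual thread.
        Thread t = Thread.startVirtualThread(
                () -> System.out.println("Running on a virtual thread!"));
        t.join(); // Wait for the thread to finish
        System.out.println("isVirtual = " + t.isVirtual()); // isVirtual = true
    }
}
```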
Diving Deeper into Virtual Threads
Although almost any Java code can be executed using Virtual Threads, we need to follow certain best practices to fully leverage their benefits.
The first step is to go back to making good use of blocking APIs.
Example of a Non-blocking API (CompletableFuture):
Here, asynchronous calls are chained to fetch a webpage and its associated image without blocking the main thread.
CompletableFuture.supplyAsync(() -> getBody(info.getUrl(), HttpResponse.BodyHandlers.ofString()))
    .thenCompose(page -> CompletableFuture.supplyAsync(() -> info.findImage(page)))
    .thenCompose(imageUrl -> CompletableFuture.supplyAsync(() -> getBody(imageUrl, HttpResponse.BodyHandlers.ofByteArray())))
    .thenApply(data -> {
        info.setImageData(data);
        return info;
    })
    .thenAcceptAsync(info -> process(info))
    .exceptionally(t -> {
        t.printStackTrace();
        return null;
    });
Example of Blocking API (Traditional Blocking Code):
This example works better with virtual threads since they are lightweight and can efficiently handle many blocking operations.
try {
    // Blocking call to fetch page content
    String page = getBody(info.getUrl(), HttpResponse.BodyHandlers.ofString());
    // Find the image URL on the page
    String imageUrl = info.findImage(page);
    // Blocking call to fetch image data
    byte[] data = getBody(imageUrl, HttpResponse.BodyHandlers.ofByteArray());
    // Set the image data
    info.setImageData(data);
    // Process the information
    process(info);
} catch (Exception e) {
    e.printStackTrace();
}
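With virtual threads, this straightforward blocking style scales simply by giving each request its own thread. A minimal sketch (fetchPage is a hypothetical stand-in for a blocking call like getBody, simulated here with a sleep):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingOnVirtualThreads {
    // Hypothetical stand-in for blocking I/O such as getBody(...).
    static String fetchPage(String url) {
        try {
            Thread.sleep(50); // simulates waiting on the network
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "<html>" + url + "</html>";
    }

    public static void main(String[] args) {
        // One virtual thread per request: blocking is cheap, because a
        // blocked virtual thread releases its carrier (OS) thread.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 3; i++) {
                String url = "https://example.com/page" + i;
                executor.submit(() -> System.out.println(fetchPage(url)));
            }
        } // close() waits for all submitted tasks to finish
    }
}
```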
Another important point: simply moving code from an OS thread onto a Virtual Thread doesn’t provide gains by itself. The benefits come from adopting a thread-per-task model.
Thread Pools vs. Thread per Task
A common mistake when working with virtual threads is simply transforming an OS thread into a virtual thread without changing the execution model. This doesn’t yield benefits, as mentioned earlier. The true advantage of virtual threads comes when you adopt the “one thread per task” pattern.
Thread Pools = Few Threads
In the traditional model, we often use thread pools to save resources since creating many OS threads consumes a lot of memory and management resources. With virtual threads, this concept can be reconsidered, as they are lightweight and efficiently managed by the JVM.
Thread per Task = Many Threads
Instead of using a thread pool to manage a limited number of threads, the ideal approach with virtual threads is to create a new thread for each task. This allows your application to be scalable and make the most of parallelism without overloading the system. Each task, whether a Runnable or Callable instance, can be executed in its own virtual thread.
TimerTask task1 = new MyTimerTask("task1"); // Each task is an independent unit of work
TimerTask task2 = new MyTimerTask("task2");
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    executor.submit(task1); // Each task runs in its own virtual thread
    executor.submit(task2);
} // executor.close() waits for the submitted tasks to complete
Explanation:
Task as a Work Unit:
In the example above, each task (task1 and task2) is encapsulated in a TimerTask, representing a unit of work. These tasks are submitted to an executor, which creates a new virtual thread for each one.
Do Not Share Virtual Threads: Unlike OS threads, virtual threads are designed not to be reused. They are lightweight enough that the best practice is to create a new virtual thread for each task rather than sharing one among multiple tasks.
Virtual Threads as Business Logic Objects: Here, virtual threads are no longer seen as a limited resource. They become part of your system’s execution logic, allowing each task to run independently in its own thread.
Beware of Pinning
This problem arises when:
- An operation takes too long to complete (often due to blocking I/O operations);
- The code runs within a synchronized block or another synchronization mechanism;
- And this operation is executed very frequently.
For pinning to occur, all three factors must be present. When they are, the Virtual Thread becomes “pinned” to its carrier (OS) thread, hurting scalability and efficiency.
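A minimal sketch of the pinning-prone pattern (fetchFromDatabase is hypothetical; it just simulates slow, blocking I/O with a sleep):

```java
public class PinningExample {
    private static final Object LOCK = new Object();

    // Hypothetical stand-in for a slow, blocking call (e.g., JDBC).
    static String fetchFromDatabase() {
        try {
            Thread.sleep(100); // blocks while the monitor below is held
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "row";
    }

    // Blocking inside synchronized pins the virtual thread: its carrier
    // (OS) thread cannot be freed to run other virtual threads until the
    // block exits. Done frequently, this erases the scalability gain.
    static String loadRow() {
        synchronized (LOCK) {
            return fetchFromDatabase();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread.startVirtualThread(() -> System.out.println(loadRow())).join();
    }
}
```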
Solution to Avoid Pinning:
Using a ReentrantLock instead of a synchronized block can avoid this problem: the JDK’s lock implementation parks the blocked virtual thread without tying it to a carrier thread.
Lock lock = new ReentrantLock();
lock.lock();
try {
    somethingLongSynchronizedAndFrequent(); // Long and frequent synchronized operation
} finally {
    lock.unlock();
}
In newer JDKs this workaround becomes unnecessary: JEP 491, delivered in JDK 24, allows virtual threads to block inside synchronized blocks without being pinned.
Limiting Concurrency with Virtual Threads
Even with the lightweight nature of virtual threads, it is sometimes necessary to control simultaneous access to limited resources, such as database connections. For this, we use a semaphore, which allows you to manage how many threads can access a resource at once.
Example with Semaphore:
Semaphore sem = new Semaphore(10); // Allows 10 simultaneous accesses
sem.acquire(); // Blocks until a permit is available (throws InterruptedException)
try {
    return somethingVeryLimited(); // Resource with limited access
} finally {
    sem.release(); // Releases the permit
}
How It Works:
The Semaphore limits the number of threads that can access a resource. In the example, up to 10 virtual threads can access the resource simultaneously. A thread waits for a permit (sem.acquire()) before accessing the resource and releases it (sem.release()) when finished, ensuring the access limit is respected.
Using Virtual Threads with Quarkus
With virtual thread support in Quarkus, you can easily take advantage of increased scalability and reduced resource consumption when dealing with blocking tasks such as I/O. The annotation @RunOnVirtualThread allows a class or method to run on a virtual thread, making the code more efficient without significant changes.
Example of Usage in a Resource Class:
import io.smallrye.common.annotation.RunOnVirtualThread;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
@Path("/example")
@RunOnVirtualThread // All requests here will be executed on virtual threads
public class ExampleResource {
    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String getExample() {
        // Code here will be executed on a virtual thread
        return "Hello from Virtual Thread!";
    }
}
How It Works:
By adding @RunOnVirtualThread to the ExampleResource class, all incoming HTTP requests are automatically handled by virtual threads. This improves performance in scenarios involving blocking operations, such as external API calls or database interactions.
You can also use this annotation on specific methods if you don’t want the entire class to use virtual threads.
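For illustration, a hedged sketch of annotating a single endpoint only (the /mixed resource, its paths, and its method bodies are hypothetical; the imports assume a Quarkus 3 / Jakarta REST setup):

```java
import io.smallrye.common.annotation.RunOnVirtualThread;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;

@Path("/mixed")
public class MixedResource {

    @GET
    @Path("/fast")
    public String fastEndpoint() {
        // Runs on a regular Quarkus worker thread
        return "fast";
    }

    @GET
    @Path("/slow")
    @RunOnVirtualThread // Only this endpoint runs on a virtual thread
    public String slowEndpoint() {
        // A good fit for blocking work such as a database call
        return "slow";
    }
}
```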
Conclusion and Benefits:
- Scalability: Virtual threads enable handling thousands of simultaneous requests efficiently.
- Simplicity: With the @RunOnVirtualThread annotation, you enable the use of virtual threads simply, without restructuring your code.
Virtual threads in Java represent a significant advancement for concurrency, allowing much more efficient use of hardware resources by multiplexing a large number of lightweight threads onto a small number of carrier (OS) threads.
The best part is that there is no need to learn new APIs or change the programming model: we continue using the familiar java.lang.Thread class, making adoption easy.
Additionally, we avoid the complexity of asynchronous programming, while maintaining ease of debugging and monitoring.
With virtual threads, Java now offers a powerful solution for scalability without sacrificing simplicity. In future articles, I will explore more about using these threads with Quarkus, maximizing their potential in modern applications.
Main Sources for This Article:
- Virtual Threads: What they are, what they’re for, and why every Java developer should care (lecture by Java Champion Elder Moraes): https://www.youtube.com/watch?v=vXnuCKKRtSQ
- Paralelismo e Concorrência 101 (article by Hugo Marques, the basis for the Concurrency and Parallelism section): https://dev.to/hugaomarques/paralelismo-e-concorrencia-101-2pgc
- JEP 425: Virtual Threads (Preview in JDK 19): https://openjdk.org/jeps/425
- Project Loom proposal: https://cr.openjdk.java.net/~rpressler/loom/Loom-Proposal.html
- Virtual Threads: New Foundations for High-Scale Java Applications (Brian Goetz, Daniel Bryant): https://www.infoq.com/articles/java-virtual-threads/
- Writing simpler reactive REST services with Quarkus Virtual Thread support: https://quarkus.io/guides/virtual-threads
- The Age of Virtual Threads (Ron Pressler, Alan Bateman): https://youtu.be/YQ6EpIk7KgY