Why use Kotlin Coroutines?
Why do we need to learn Kotlin Coroutines? On the JVM, we already have well-established libraries like RxJava and Reactor. Moreover, Java itself supports multithreading. Many people also choose to just use plain old callbacks instead. Clearly, we already have many options for performing asynchronous operations.
Kotlin Coroutines offer much more than that. They are an implementation of a concept that was first described in 1963¹ but waited years for a proper industry-ready implementation². Kotlin Coroutines connect the powerful capabilities presented in half-century-old papers with a library designed to perfectly support real-life use cases. What is more, Kotlin Coroutines are multiplatform, which means they can be used across all Kotlin targets (JVM, JS, iOS, and in common modules as well). Finally, they do not change the code structure drastically: we can use most Kotlin Coroutines capabilities nearly effortlessly (which we cannot say about RxJava or callbacks). This makes them beginner-friendly³.
Let's see this in practice. We will explore how common use cases are handled by coroutines and by the other popular approaches. I have separated out two typical scenarios: Android development and backend business logic implementation.
Coroutines on Android (and other frontend applications)
When you implement application logic on the frontend, what you most often need to do is to:
- get some data from one or many sources (API, view element, database, preferences, another application),
- process this data,
- do something with this data (display it in the view, store it in a database, send it to an API).
To make our discussion more practical, let's first assume we are developing an Android application. We will start with a problem where we need to fetch news from an API, sort it, and display it on the screen. This is a direct representation of what we want our function to do:
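A sketch of that ideal version might look as follows. The names here (`getNewsFromApi`, `view.showNews`, a `publishedAt` field) are hypothetical placeholders, and this version blocks the thread it runs on:

```kotlin
// A direct (but blocking) representation of the goal.
// getNewsFromApi, view, and publishedAt are hypothetical names.
fun onCreate() {
    val news = getNewsFromApi()                // network call (blocks!)
    val sortedNews = news
        .sortedByDescending { it.publishedAt } // process the data
    view.showNews(sortedNews)                  // display on the screen
}
```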
Sadly, this cannot be done so easily. If we ran this function on the Main thread (the only thread that can update the view in an Android application), we would block our entire application. This is why it is illegal on Android to perform network operations on the Main thread, so the code above would throw an exception. If we ran it on another thread, we would not be able to show the news in the view, because that can only be done on the Main thread.
We could solve these problems by switching threads two times, as presented in the code below.
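One possible shape of that double thread switch, sketched with the same hypothetical names (`thread` comes from `kotlin.concurrent`; `runOnUiThread` is the Android `Activity` helper):

```kotlin
fun onCreate() {
    thread {                                   // switch to a background thread
        val news = getNewsFromApi()
        val sortedNews = news
            .sortedByDescending { it.publishedAt }
        runOnUiThread {                        // switch back to the Main thread
            view.showNews(sortedNews)
        }
    }
}
```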
Such thread switching can still be found in some applications, but it is known for being problematic for several reasons:
- There is no mechanism here to cancel those threads, so we often face memory leaks.
- Making so many threads is costly.
- Frequently switching threads is confusing and hard to manage.
- The code will unnecessarily get bigger and more complicated.
Considering all these problems, we need to find a better mechanism.
There is one pattern that can help here: Callbacks. When we use that pattern, we pass a function to another function that should be called once the data are ready to be used. The callback function starts a process of obtaining data, and the rest happens in some other thread. Once the data are obtained, our callback gets called. This pattern can help us as follows:
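Assuming `getNewsFromApi` is rewritten to accept a callback (a hypothetical API that delivers its result on the Main thread), the sketch might look like this:

```kotlin
fun onCreate() {
    getNewsFromApi { news ->                   // invoked once the data arrive
        val sortedNews = news
            .sortedByDescending { it.publishedAt }
        view.showNews(sortedNews)
    }
}
```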
We still might face memory leaks, since we do not cancel threads that are no longer needed, but at least the callback functions take responsibility for switching threads. Callback architecture solves this simple problem, but it has many downsides. To explore them, let's discuss a more complex case, where we need to get data from three endpoints:
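A sketch of how this nesting might look, with three hypothetical callback-based endpoints:

```kotlin
fun showNews() {
    getConfigFromApi { config ->
        getNewsFromApi(config) { news ->
            getUserFromApi { user ->
                view.showNews(user, news)      // three levels deep already
            }
        }
    }
}
```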
This code is far from perfect for several reasons:
- Getting the news and the user data could happen in parallel, but our current callback architecture does not support that (it would be hard to achieve with callbacks).
- Callbacks do not support cancellation, and we are dealing with memory leaks.
- More and more indentation makes this code hard to read. Code with multiple nested callbacks is often considered highly unreadable. This situation is called "callback hell", and it can be found especially in some older Node.JS projects.
- When we use callbacks, it is hard to control what happens after what. The following way of showing a progress indicator will not work:
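For example, a naive attempt like the sketch below fails, because the callback-based `showNews` only starts the asynchronous work and then returns immediately:

```kotlin
fun onCreate() {
    showProgressBar()
    showNews()          // only *starts* the callback-based loading...
    hideProgressBar()   // ...so this runs right away, not after the news is shown
}
```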
The progress bar will be hidden just after the process of showing the news has started, so practically immediately after it has been shown. To make this work, we would need to pass the hiding of the progress bar as a callback as well.
That's why the callback architecture is far from perfect for non-trivial projects. Let's take a look at another approach: reactive streams.
RxJava and other reactive streams
An alternative approach that is popular in Java (both in Android and backend) is using reactive streams (or Reactive Extensions): RxJava or its successor Reactor. With this approach, all the operations happen inside a stream of data that can be started, processed, and observed. Those streams support thread-switching and concurrent processing, so they are often used to parallelize processing in our applications.
This is how we might solve our problem using RxJava:
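A hedged sketch of such a solution, assuming `getNewsFromApi` returns an RxJava `Single` and `disposables` is a `CompositeDisposable` (the `+=` operator comes from the RxKotlin extensions; `AndroidSchedulers` is part of RxAndroid):

```kotlin
fun onCreate() {
    disposables += getNewsFromApi()
        .subscribeOn(Schedulers.io())               // do the work on an IO thread
        .observeOn(AndroidSchedulers.mainThread())  // deliver results on the Main thread
        .map { news ->
            news.sortedByDescending { it.publishedAt }
        }
        .subscribe { sortedNews ->
            view.showNews(sortedNews)
        }
}
```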
`disposables` in the above example is needed to cancel this stream if (for example) the user exits the screen.
This is definitely a better solution than callbacks: no memory leaks, cancellation is supported, and threads are used properly. The only problem is that it is complicated. If you compare it with the "ideal" code from the beginning, they have very little in common.
All those functions, like `subscribe`, need to be learned; cancelling needs to be explicit; and functions need to return objects wrapped inside RxJava classes such as `Single` or `Observable`.
Think of the second problem, where we need to call three endpoints before showing data. It can be solved with RxJava properly, but it is even more complicated.
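One way it might look, again as a sketch with hypothetical endpoint functions returning `Single` values:

```kotlin
fun showNews() {
    disposables += Single.zip(
        getConfigFromApi().flatMap { config -> getNewsFromApi(config) },
        getUserFromApi()
    ) { news, user -> Pair(news, user) }
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe { (news, user) ->
            view.showNews(user, news)
        }
}
```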
This code is truly concurrent and has no memory leaks, but we need to introduce RxJava functions such as `flatMap`, pack values into a `Pair`, and destructure them. This is a good implementation; it is just quite complicated. So, finally, let's see what coroutines offer us.
Using Kotlin coroutines
The core functionality that Kotlin coroutines introduce is the ability to suspend a coroutine at some point and resume it at a later point in the future. Thanks to that, we might run our code on the Main thread and suspend it when we request data from the API. When a coroutine is suspended, the thread is not blocked and is free to go: it can be used to change the view or to process other coroutines. Once the data are ready, the coroutine waits for the Main thread (this is a rare situation, but there might be a queue of coroutines waiting for it), and once it gets the thread, it continues from the point where it stopped.
This picture shows the functions `updateNews` and `updateProfile` running on the Main thread in separate coroutines. They can do this interchangeably because they suspend their coroutines instead of blocking the thread. When `updateNews` is waiting for a network response, the Main thread is used by `updateProfile`. Here it is assumed that `getUserData` did not suspend, because the user's data were already cached, so it can run until completion. That wasn't enough time for the network response, so the Main thread is not used at that time (it can be used by other functions). Once the data appear, we grab the Main thread and use it to resume `updateNews` from the point straight after its suspension.
So our first problem might be solved by using Kotlin coroutines in the following way:
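Here is how that might look, assuming some coroutine scope is available (on Android, typically `viewModelScope` or `lifecycleScope` from AndroidX) and that `getNewsFromApi` is now a suspending function:

```kotlin
fun onCreate() {
    viewModelScope.launch {
        val news = getNewsFromApi()            // suspends instead of blocking
        val sortedNews = news
            .sortedByDescending { it.publishedAt }
        view.showNews(sortedNews)              // we are still on the Main thread
    }
}
```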
This code is nearly identical to what we wanted from the beginning! In this solution, the code runs on the Main thread, but it never blocks it. Thanks to the suspension mechanism, we suspend (instead of blocking) the coroutine when we need to wait for data. While the coroutine is suspended, the Main thread can do other things, like drawing a beautiful progress-bar animation. Once the data are ready, our coroutine takes the Main thread again and continues from where it previously stopped.
How about the other problem, with three calls? It could be solved similarly:
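A sketch of the sequential version, with the same hypothetical suspending endpoints:

```kotlin
fun showNews() {
    viewModelScope.launch {
        val config = getConfigFromApi()        // 1 second
        val news = getNewsFromApi(config)      // 1 second
        val user = getUserFromApi()            // 1 second
        view.showNews(user, news)              // ~3 seconds in total
    }
}
```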
This is a good-looking solution, but the way it works is not optimal: those calls happen sequentially (one after another), so if each of them takes 1 second, the whole function takes 3 seconds instead of the 2 seconds we could achieve if the API calls executed in parallel. This is where the Kotlin coroutines library helps us with functions like `async`, which can be used to immediately start another coroutine with some request and await its result later (with the `await` function).
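Using `async` and `await`, the same sketch becomes concurrent; the independent calls run at the same time, so the total is roughly the longest path:

```kotlin
fun showNews() {
    viewModelScope.launch {
        val config = async { getConfigFromApi() }
        val news = async { getNewsFromApi(config.await()) }
        val user = async { getUserFromApi() }
        view.showNews(user.await(), news.await())  // ~2 seconds in total
    }
}
```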
With Kotlin coroutines, we can easily implement different use cases and use other Kotlin features. For instance, they do not prevent us from using for-loops or collection-processing functions. Below you can see how the next pages might be downloaded in parallel or one after another.
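A sketch of both variants, assuming hypothetical `getNumberOfPages` and page-based `getNewsFromApi` suspending functions:

```kotlin
// All pages downloaded in parallel.
fun showAllNews() {
    viewModelScope.launch {
        val allNews = (0 until getNumberOfPages())
            .map { page -> async { getNewsFromApi(page) } }
            .flatMap { it.await() }
        view.showAllNews(allNews)
    }
}

// Pages downloaded one after another.
fun showPagesFromFirst() {
    viewModelScope.launch {
        for (page in 0 until getNumberOfPages()) {
            val news = getNewsFromApi(page)
            view.showNextPage(news)
        }
    }
}
```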
Coroutines on backend
Backend developers do not have a problem with the Main thread, but blocking threads is still not great. Threads are costly: they need to be created and maintained, and each allocates its own memory⁴. If your application is used by millions of users, and you block a thread whenever you wait for a response from a database or another service, this adds up to a significant cost in memory and processor use (for the creation, maintenance, and synchronization of those threads).
This problem can be visualized with the following snippets, which simulate a backend service with 100,000 users asking for data. The first snippet starts 100,000 threads and makes them sleep for a second (to simulate waiting for a response from a database or another service). If you run it on your computer, you will see that it takes a while to print all those dots, or it will break with an `OutOfMemoryError` exception. This is the cost of running so many threads. The second snippet uses coroutines instead of threads and suspends them instead of making them sleep. If you run it, the program will wait for a second and then print all the dots. The cost of starting all those coroutines is so small that it is barely noticeable.
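The two snippets might look like this (a runnable sketch; the names `usingThreads` and `usingCoroutines` are chosen here so that both versions fit in one file, and the coroutine version needs the `kotlinx-coroutines-core` dependency):

```kotlin
import kotlin.concurrent.thread
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

// One thread per simulated user: on most machines this takes a long
// time to print all the dots, or crashes with OutOfMemoryError.
fun usingThreads() {
    repeat(100_000) {
        thread {
            Thread.sleep(1000L)
            print(".")
        }
    }
}

// One coroutine per simulated user: waits about a second,
// then prints all the dots.
fun usingCoroutines() = runBlocking {
    repeat(100_000) {
        launch {
            delay(1000L)
            print(".")
        }
    }
}
```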
Kotlin coroutines on the backend offer us simplicity. In most cases, we just call suspending functions from other suspending functions. In such cases, we can almost forget that we are using coroutines, while at the same time they make our code more efficient. When we need to introduce some concurrency, we can do that easily using features like `Flow`. The example below shows a suspending function calling another suspending function; when we use coroutines, the only difference is that most functions are marked with the `suspend` modifier. The second snippet shows how concurrency can easily be introduced into suspending functions: we just wrap a function body with `coroutineScope`, and inside it we can freely use coroutine builders like `async`.
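A minimal, self-contained sketch of both snippets; `getUser` and `getArticles` are hypothetical stand-ins for real repository calls:

```kotlin
import kotlinx.coroutines.async
import kotlinx.coroutines.coroutineScope

// Hypothetical repository functions, assumed to be suspending.
suspend fun getUser(id: String): String = "user-$id"
suspend fun getArticles(): List<String> = listOf("a1", "a2")

// A suspending function calling other suspending functions
// looks like regular sequential code.
suspend fun getUserArticles(id: String): Pair<String, List<String>> {
    val user = getUser(id)
    val articles = getArticles()
    return user to articles
}

// Introducing concurrency: wrap the body in coroutineScope
// and start the independent calls with async.
suspend fun getUserArticlesConcurrently(id: String): Pair<String, List<String>> =
    coroutineScope {
        val user = async { getUser(id) }
        val articles = async { getArticles() }
        user.await() to articles.await()
    }
```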
I hope you now feel convinced to learn more about Kotlin Coroutines. They are much more than just a library, and they make concurrent programming about as easy as modern tools allow. If we have that settled, let's start learning. Throughout the rest of this part, we will explore how suspension works: first from the usage point of view, then under the hood. In the second part, we will cover the essential concepts and tools offered by the Kotlin Coroutines library (kotlinx.coroutines). The third part is about Channel and Flow, which are in some ways an alternative to RxJava or Reactor. Ready? Let's start the adventure.
¹ Conway, Melvin E. (July 1963). "Design of a Separable Transition-diagram Compiler". Communications of the ACM. ACM. 6 (7): 396–408. doi:10.1145/366663.366704. ISSN 0001-0782. S2CID 10559786.
² Some say that the first industry-ready and universal coroutines were introduced by Go, where they are called goroutines. It is worth mentioning that coroutines were also implemented in some older languages, like Lisp, but they never became popular. I believe this is because their implementation was not designed to support real-life cases; Lisp (just like Haskell) was treated as a playground for scientists rather than as a language for professionals.
³ That does not change the fact that we should understand coroutines in order to use them well.
⁴ Most often, the default size of a thread's stack is 1 MB. Due to Java optimizations, this does not necessarily mean that 1 MB times the number of threads will be used, but it is still a lot of extra memory spent just because we create threads.