Why Kotlin Coroutines?

This is a chapter from the book Kotlin Coroutines. You can find it in Early Access on LeanPub.

Why do we need to learn Kotlin Coroutines? On the JVM, we already have well-established libraries like RxJava and Reactor. Moreover, Java itself supports multithreading. Many people also choose to just use plain old callbacks instead. Clearly, we already have many options for performing asynchronous operations.

Kotlin Coroutines offer much more than that. They are an implementation of a concept that was first described in 1963¹ but waited years for a proper industry-ready implementation². Kotlin coroutines connect the powerful capabilities presented in those half-century-old papers with a library designed to support real-life use cases. What is more, Kotlin coroutines are multiplatform, which means they can be used across all Kotlin targets (JVM, JS, iOS, as well as common modules). Finally, they do not change the code structure drastically: we can use most Kotlin coroutines capabilities nearly effortlessly (which cannot be said about RxJava or callbacks). This makes them beginner-friendly³.

Let's see this in practice. We will explore how some common use cases are solved with coroutines and with the other approaches mentioned above. I have separated two typical scenarios: Android (and other frontend) logic and backend business logic.

Coroutines on Android (and other frontend applications)

When you implement application logic on the frontend, what you most often need to do is to:

  1. get some data from one or many sources (an API, a view element, a database, preferences, another application),
  2. process this data,
  3. do something with this data (display it in the view, store it in a database, send it to an API).

To make our discussion more practical, let's first assume we are developing an Android application. We will start with a problem where we need to get news from an API, sort it, and display it on the screen. This is a direct representation of what we want our function to do:

fun onCreate() {
    val news = getNewsFromApi()
    val sortedNews = news
        .sortedByDescending { it.publishedAt }
    view.showNews(sortedNews)
}

Sadly, this cannot be done so easily. If we start this function on the main thread (the only one capable of updating the view in the application), we cannot block it while we get news from the API, because that would block our whole application (which would lead to our application not responding). If we switch to another thread, we will not be able to show the news on the view, because that can only be done on the main thread.

Thread switching

We could solve these problems by switching threads, as presented in the code below.

fun onCreate() {
    thread {
        val news = getNewsFromApi()
        val sortedNews = news
            .sortedByDescending { it.publishedAt }
        runOnUiThread {
            view.showNews(sortedNews)
        }
    }
}

Such thread switching can still be found in some applications, but it is known for being problematic for several reasons:

  • There is no mechanism here to cancel those threads, so we often face memory leaks.
  • Creating so many threads is costly.
  • Frequently switching threads is confusing and hard to manage.
  • The code unnecessarily gets bigger and more complicated.

Considering all these problems, we need to find a better mechanism.

Callbacks

There is one pattern that can help here: callbacks. When we use this pattern, we pass to another function a callback function that should be invoked once the data is ready. The function we call starts the process of obtaining the data on some other thread, and once the data is obtained, it invokes our callback. This pattern can help us as follows:

fun onCreate() {
    getNewsFromApi { news ->
        val sortedNews = news
            .sortedByDescending { it.publishedAt }
        view.showNews(sortedNews)
    }
}

We might still face memory leaks, since we do not cancel threads that are no longer needed, but at least the callback functions take responsibility for switching threads. This architecture has many other downsides. To understand them, let's discuss a more complex case, where we need to get data from three endpoints:

fun showNews() {
    getConfigFromApi { config ->
        getNewsFromApi(config) { news ->
            getUserFromApi { user ->
                view.showNews(user, news)
            }
        }
    }
}

This code is far from perfect, for several reasons:

  • Getting the news and the user data could happen in parallel, but our current callback architecture does not support that (it would be hard to achieve with callbacks).
  • Every callback adds another level of indentation, which makes this code hard to read. Code with multiple nested callbacks is often considered highly unreadable. This situation is called "callback hell" and can be found especially in some older Node.js projects.

  • When we use callbacks, it is hard to control what happens after what. The following way of showing a progress indicator will not work:
fun onCreate() {
    showProgressBar()
    showNews()
    hideProgressBar() // Wrong
}

The progress bar will be hidden right after the process of showing the news has started, so practically immediately after it has been shown. To make this work, we would need to hide the progress bar inside a callback:

fun onCreate() {
    showProgressBar()
    showNews {
        hideProgressBar()
    }
}
  • Callbacks often need to be used in many places, which makes our code unnecessarily complicated.
  • Callbacks do not support cancellation, so we are still dealing with memory leaks.

That's why the callback architecture is far from perfect for non-trivial projects. Let's take a look at another approach: reactive streams.

RxJava and other reactive streams

An alternative approach that is popular in the Java world (both on Android and on the backend) is to use reactive streams (or Reactive Extensions): RxJava or its successor Reactor. With this approach, all operations happen inside a stream of data that can be started, processed, and observed. These streams support thread switching and concurrent processing, so they are often used to parallelize processing in our applications.

This is how we might solve our problem using RxJava:

fun onCreate() {
    disposables += getNewsFromApi()
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .map { news ->
            news.sortedByDescending { it.publishedAt }
        }
        .subscribe { sortedNews ->
            view.showNews(sortedNews)
        }
}

The disposables in the above example are needed to cancel this stream if (for example) the user exits the screen.
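In practice, such a field is usually cleared when the view is destroyed. Below is a minimal sketch of how this might look (assuming an Android Activity, RxJava 2, and the RxKotlin plusAssign operator used in the snippet above), not code from the example itself:

import androidx.appcompat.app.AppCompatActivity
import io.reactivex.disposables.CompositeDisposable

class MainActivity : AppCompatActivity() {
    // Subscriptions created in onCreate are added here with +=
    private val disposables = CompositeDisposable()

    override fun onDestroy() {
        super.onDestroy()
        // Disposing all subscriptions, so no stream outlives the screen
        disposables.clear()
    }
}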

This is definitely a better solution than callbacks: there are no memory leaks, cancellation is supported, and threads are used properly. The only problem is that it is complicated. If you compare it with the "ideal" code from the beginning (also shown below), you will see they have very little in common.

fun onCreate() {
    val news = getNewsFromApi()
    val sortedNews = news
        .sortedByDescending { it.publishedAt }
    view.showNews(sortedNews)
}

Functions like subscribeOn, observeOn, map, and subscribe all need to be learned. Cancelling needs to be explicit. Functions need to return objects wrapped inside types like Observable or Single:

fun getNewsFromApi(): Single<List<News>>
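Such a function might, for instance, wrap a blocking client call. This is only a hypothetical sketch; newsApi and its fetchNews method are made up for illustration:

fun getNewsFromApi(): Single<List<News>> =
    Single.fromCallable {
        // A blocking call, executed later on the scheduler set with subscribeOn
        newsApi.fetchNews()
    }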

Consider the second problem, where we need to call three endpoints before showing the data. It can be solved properly with RxJava, but the solution is even more complicated:

fun showNews() {
    disposables += Observable.zip(
        getConfigFromApi().flatMap { getNewsFromApi(it) },
        getUserFromApi(),
        BiFunction { news: List<News>, user: User ->
            Pair(news, user)
        })
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe { (news, user) ->
            view.showNews(user, news)
        }
}

This code is truly concurrent and has no memory leaks, but we need to introduce RxJava functions such as zip and flatMap, pack values into a Pair, and destructure it. This is a correct implementation; it is just quite complicated. So, finally, let's see what coroutines offer us.

Using Kotlin coroutines

The core functionality that Kotlin coroutines introduce is the ability to suspend a coroutine at some point and resume it in the future. Thanks to that, we can run our code on the main thread and suspend it while we request data from the API. When a coroutine is suspended, the thread is not blocked and is free to go: it can be used to update the view or to process other coroutines. Once the data is ready, the coroutine waits for the main thread (it is a rare situation, but there might be a queue of coroutines waiting for it); once it gets the thread, it continues from the point where it stopped.

A thread can be handling one coroutine, but once that coroutine is suspended, the thread is free to run another one. When the first coroutine can be resumed, it waits for the thread to become available again and uses it at the first opportunity.
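We can observe this with a small self-contained example (a sketch just for illustration): two coroutines share the single thread used by runBlocking, and the second one runs while the first one is suspended on delay.

import kotlinx.coroutines.*

fun main() = runBlocking {
    launch {
        delay(1000L) // suspends this coroutine without blocking the thread
        println("First coroutine resumed on ${Thread.currentThread().name}")
    }
    launch {
        println("Second coroutine ran while the first one was suspended")
    }
}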

By definition, coroutines are components that can be suspended and resumed. Concepts like async/await and generators, which can be found in languages like JavaScript, Rust, or Python, also use coroutines, although their capabilities are very limited.

So our first problem might be solved by using Kotlin coroutines in the following way:

fun onCreate() {
    scope.launch {
        val news = getNewsFromApi()
        val sortedNews = news
            .sortedByDescending { it.publishedAt }
        view.showNews(sortedNews)
    }
}

This code is nearly identical to what we wanted from the beginning! In this solution, the code runs on the main thread, but it never blocks it. Thanks to the suspension mechanism, we suspend (instead of blocking) the coroutine when we need to wait for data. While the coroutine is suspended, the main thread can do other things, like drawing a beautiful progress bar animation. Once the data is ready, our coroutine takes the main thread again and continues from where it previously stopped.
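For this to work, getNewsFromApi needs to be a suspending function. How it is implemented depends on the networking library; below is just one hypothetical sketch, assuming a blocking client (newsApiClient.fetchNews is made up for illustration) whose call we move off the main thread with withContext:

suspend fun getNewsFromApi(): List<News> =
    withContext(Dispatchers.IO) {
        // The blocking call runs on the IO dispatcher, while the calling
        // coroutine is suspended and the main thread stays free
        newsApiClient.fetchNews()
    }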

How about the other problem, with three calls? It could be solved similarly:

fun showNews() {
    scope.launch {
        val config = getConfigFromApi()
        val news = getNewsFromApi(config)
        val user = getUserFromApi()
        view.showNews(user, news)
    }
}

This is a good-looking solution, but the way it works is not optimal. These calls happen sequentially (one after another), so if each of them takes 1 second, the whole function takes 3 seconds instead of the 2 seconds we could achieve if the API calls were executed in parallel. This is where the Kotlin coroutines library helps us with functions like async, which immediately starts another coroutine with some request and lets us wait for its result later (with the await function).

fun showNews() {
    scope.launch {
        val config = async { getConfigFromApi() }
        val news = async { getNewsFromApi(config.await()) }
        val user = async { getUserFromApi() }
        view.showNews(user.await(), news.await())
    }
}

This code is still simple and readable. It uses the async/await pattern that is popular in other languages such as JavaScript and C#. It is also efficient and has no memory leaks (assuming we use cancellation, which can be done effortlessly, as we will explain later). The code is both simple and well implemented.
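Cancellation itself will be covered later, but as a preview, here is a minimal sketch of what it might look like, assuming we own the scope used above (for example, in an Activity or presenter):

private val scope = CoroutineScope(Dispatchers.Main)

override fun onDestroy() {
    super.onDestroy()
    // Cancelling the scope cancels every coroutine started in it with launch or async
    scope.cancel()
}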

With Kotlin coroutines, we can easily implement different use cases while still using other Kotlin features. For instance, they do not block us from using for-loops or collection processing functions. Below you can see how successive pages of news might be downloaded either in parallel or one after another.

// all pages will be loaded simultaneously
fun showAllNews() {
    scope.launch {
        val allNews = (0 until getNumberOfPages())
            .map { page -> async { getNewsFromApi(page) } }
            .flatMap { it.await() }
        view.showAllNews(allNews)
    }
}

// next pages are loaded one after another
fun showPagesFromFirst() {
    scope.launch {
        for (page in 0 until getNumberOfPages()) {
            val news = getNewsFromApi(page)
            view.showNextPage(news)
        }
    }
}

Coroutines on the backend

Backend developers do not have a problem with the main thread, but blocking threads is still not great. Threads are costly: they need to be created and maintained, and each of them allocates its own memory⁴. If your application is used by millions of users and a thread is blocked whenever you wait for a response from a database or another service, this adds up to a significant cost in memory and processor use (for the creation, maintenance, and synchronization of those threads).

This problem can be visualized with the following snippets, which simulate a backend service with 100 000 users asking for data. The first snippet starts 100 000 threads and makes them sleep for a second (to simulate waiting for a response from a database or another service). If you run it on your computer, you will see that it takes a while to print all those dots, or it will break with an OutOfMemoryError. This is the cost of running so many threads. The second snippet uses coroutines instead of threads and suspends them instead of making them sleep. If you run it, the program waits for a second and then prints all the dots. The cost of starting all those coroutines is so low that it is barely noticeable.

import kotlin.concurrent.thread

fun main() {
    repeat(100_000) {
        thread {
            Thread.sleep(1000L)
            print(".")
        }
    }
}

import kotlinx.coroutines.*

fun main() = runBlocking {
    repeat(100_000) {
        launch {
            delay(1000L)
            print(".")
        }
    }
}

Kotlin coroutines on the backend offer simplicity. In most cases, we just call suspending functions from other suspending functions. We can almost forget we are using coroutines, while at the same time they make our code more efficient. When we need to introduce some concurrency, we can do so easily using features like async, Channel, or Flow. The first example below shows a suspending function calling another suspending function; when we use coroutines, the only visible difference is that most functions are marked with the suspend modifier. The second one shows how easily concurrency can be introduced in suspending functions: we just wrap the function body with coroutineScope, and inside it we can freely use coroutine builders like async.

suspend fun getArticle(
    articleKey: String,
    lang: Language
): ArticleJson? {
    return articleRepository.getArticle(articleKey, lang)
        ?.let { toArticleJson(it) }
}

suspend fun getAllArticles(
    userUuid: String?,
    lang: Language
): List<ArticleJson> = coroutineScope {
    val user = async { userRepo.findUserByUUID(userUuid) }
    val articles = articleRepo.getArticles(lang)
    articles
        .filter { hasAccess(user.await(), it) }
        .map { toArticleJson(it) }
}

Conclusion

I hope you now feel convinced to learn more about Kotlin coroutines. They are much more than just a library: they make concurrent programming about as easy as it can be with today's tools. With that settled, let's start learning. Throughout the rest of this part, we will explore how suspension works, first from the usage point of view, then under the hood. In the second part, we will cover the essential concepts and tools offered by the Kotlin Coroutines library (kotlinx.coroutines). The third part is about Channel and Flow, which are in a way an alternative to RxJava or Reactor. Finally, in the last part, we will look at the bigger picture and summarize the most problematic concepts and the best practices for Kotlin coroutines. Ready? Then let's start the adventure.

1:

Conway, Melvin E. (July 1963). "Design of a Separable Transition-diagram Compiler". Communications of the ACM. ACM. 6 (7): 396–408. doi:10.1145/366663.366704. ISSN 0001-0782. S2CID 10559786

2:

Some say that the first industry-ready and universal coroutines were introduced by Go, where they are called goroutines. It is worth mentioning that coroutines were also implemented in some older languages, like Lisp, but they never became popular. I believe this is because their implementation was not designed to support real-life cases: Lisp (just like Haskell) was treated as a playground for scientists rather than as a language for professionals.

3:

It does not change the fact that we should understand them to use them well.

4:

Most often, the default size of a thread's stack is 1 MB. Due to JVM optimizations, this does not mean that 1 MB multiplied by the number of threads will actually be used, but it is still a lot of extra memory spent just because we create threads.