
Kotlin - tail recursion optimization

The Kotlin compiler optimizes tail-recursive calls, with a few catches. Consider a rank function to search for the index of an element in a sorted array, implemented the following way using tail recursion, along with a test for it:

fun rank(k: Int, arr: Array<Int>): Int {
    tailrec fun rank(low: Int, high: Int): Int {
        if (low > high) {
            return -1
        }
        val mid = (low + high) / 2

        return when {
            (k < arr[mid]) -> rank(low, mid - 1)
            (k > arr[mid]) -> rank(mid + 1, high)
            else -> mid
        }
    }

    return rank(0, arr.size - 1)
}

@Test
fun rankTest() {
    val array = arrayOf(2, 4, 6, 9, 10, 11, 16, 17, 19, 20, 25)
    assertEquals(-1, rank(100, array))
    assertEquals(0, rank(2, array))
    assertEquals(2, rank(6, array))
    assertEquals(5, rank(11, array))
    assertEquals(10, rank(25, array))
}

IntelliJ provides an awesome feature to show the bytecode of any Kotlin code, along the lines of the following screenshot:



A Kotlin equivalent of the type of bytecode that the Kotlin compiler generates is the following:

fun rankIter(k: Int, arr: Array<Int>): Int {
    fun rankIter(low: Int, high: Int): Int {
        var lo = low
        var hi = high
        while (lo <= hi) {
            val mid = (lo + hi) / 2

            if (k < arr[mid]) {
                hi = mid - 1
            } else if (k > arr[mid]) {
                lo = mid + 1
            } else {
                return mid
            }
        }
        return -1
    }

    return rankIter(0, arr.size - 1)
}

The tail calls have been translated into a simple loop.


There are a few catches that I could see though:
1. The compiler has to be explicitly told which calls are tail-recursive using the "tailrec" modifier
2. Adding the "tailrec" modifier to a non-tail-recursive function does not generate a compiler error, though a warning does appear during the compilation step (a small example follows below)
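
As a small illustration of the second catch, here is a sketch (not from the original post) of a function that is not tail-recursive - the recursive call is wrapped in a multiplication - so the "tailrec" modifier only produces a compiler warning and no loop transformation happens:

// Not tail-recursive: the last operation is the multiplication, not the recursive call,
// so marking this "tailrec" compiles with a warning rather than an error
tailrec fun factorial(n: Int): Long =
    if (n <= 1) 1L else n * factorial(n - 1)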


Kotlin - Try type for functional exception handling

Scala has a Try type for handling exceptions functionally. I got my head around this type using the excellent Neophyte's Guide to Scala by Daniel Westheide. This post will replicate this type using Kotlin.


Background


Consider a simple function which takes two Strings, converts them to integers and then divides them (sample based on the scaladoc of Try):

fun divide(dividend: String, divisor: String): Int {
    val num = dividend.toInt()
    val denom = divisor.toInt()
    return num / denom
}

It is the caller's responsibility to ensure that any exception propagated from this implementation is handled appropriately using the exception handling mechanism of Java/Kotlin:

try {
    divide("5t", "4")
} catch (e: ArithmeticException) {
    println("Got an exception $e")
} catch (e: NumberFormatException) {
    println("Got an exception $e")
}

My objective with the "Try" code will be to transform the "divide" to something which looks like this:

fun divideFn(dividend: String, divisor: String): Try<Int> {
    val num = Try { dividend.toInt() }
    val denom = Try { divisor.toInt() }
    return num.flatMap { n -> denom.map { d -> n / d } }
}

A caller of this variant of the "divide" function will not have an exception to handle through a try/catch block; instead, it gets back the exception as a value which it can introspect and act on as needed.

val result = divideFn("5t", "4")
when (result) {
    is Success -> println("Got ${result.value}")
    is Failure -> println("An error : ${result.e}")
}

Kotlin implementation

The "Try" type has two implementations corresponding to the "Success" path or a "Failure" path and implemented as a sealed class the following way:

sealed class Try<out T> {}

data class Success<out T>(val value: T) : Try<T>() {}

data class Failure<out T>(val e: Throwable) : Try<T>() {}

The "Success" type wraps around the successful result of an execution and "Failure" type wraps any exception thrown from the execution.

So now, to add some meat to these, my first test is to return one of these types based on a clean and exceptional implementation, along these lines:

val trySuccessResult: Try<Int> = Try {
    4 / 2
}
assertThat(trySuccessResult.isSuccess()).isTrue()


val tryFailureResult: Try<Int> = Try {
    1 / 0
}
assertThat(tryFailureResult.isFailure()).isTrue()

This can be achieved through a "companion object" in Kotlin, similar to static methods in Java; it returns either a Success type or a Failure type based on the execution of the lambda expression:

sealed class Try<out T> {
    ...
    companion object {
        operator fun <T> invoke(body: () -> T): Try<T> {
            return try {
                Success(body())
            } catch (e: Exception) {
                Failure(e)
            }
        }
    }
    ...
}

Now that a caller has a "Try" type, they can check whether it is a "Success" type or a "Failure" type using the "when" expression like before, or using "isSuccess" and "isFailure" methods which are delegated to the sub-types like this:

sealed class Try<out T> {
    abstract fun isSuccess(): Boolean
    abstract fun isFailure(): Boolean
}

data class Success<out T>(val value: T) : Try<T>() {
    override fun isSuccess(): Boolean = true
    override fun isFailure(): Boolean = false
}

data class Failure<out T>(val e: Throwable) : Try<T>() {
    override fun isSuccess(): Boolean = false
    override fun isFailure(): Boolean = true
}

In case of a Failure, a default can be returned to the caller, something like this in a test:

val t1 = Try { 1 }

assertThat(t1.getOrElse(100)).isEqualTo(1)

val t2 = Try { "something" }
    .map { it.toInt() }
    .getOrElse(100)

assertThat(t2).isEqualTo(100)

Again, this is implemented by delegating to the subtypes:

sealed class Try<out T> {
    abstract fun get(): T
    abstract fun getOrElse(default: @UnsafeVariance T): T
    abstract fun orElse(default: Try<@UnsafeVariance T>): Try<T>
}

data class Success<out T>(val value: T) : Try<T>() {
    override fun getOrElse(default: @UnsafeVariance T): T = value
    override fun get() = value
    override fun orElse(default: Try<@UnsafeVariance T>): Try<T> = this
}

data class Failure<out T>(val e: Throwable) : Try<T>() {
    override fun getOrElse(default: @UnsafeVariance T): T = default
    override fun get(): T = throw e
    override fun orElse(default: Try<@UnsafeVariance T>): Try<T> = default
}


The biggest advantage of returning a "Try" type, however, is in chaining further operations on the type.

Chaining with map and flatMap

"map" operation is passed a lambda expression to transform the value in some form - possibly even to a different type:

val t1 = Try { 2 }

val t2 = t1.map({ it * 2 }).map { it.toString() }

assertThat(t2).isEqualTo(Success("4"))

Here a number is being doubled and then converted to a string. If the initial Try were a "Failure", then the final result would simply be that "Failure", along the lines of this test:

val t1 = Try {
2 / 0
}

val t2 = t1.map({ it * 2 }).map { it * it }

assertThat(t2).isEqualTo(Failure<Int>((t2 as Failure).e))


Implementing "map" is fairly straightforward:

sealed class Try<out T> {
    fun <U> map(f: (T) -> U): Try<U> {
        return when (this) {
            is Success -> Try {
                f(this.value)
            }
            is Failure -> this as Failure<U>
        }
    }
}


flatMap, on the other hand, takes in a lambda expression which returns another "Try" type and flattens the result back into a "Try" type, along the lines of this test:

val t1 = Try { 2 }

val t2 = t1
    .flatMap { i -> Try { i * 2 } }
    .flatMap { i -> Try { i.toString() } }

assertThat(t2).isEqualTo(Success("4"))

Implementing this is simple too, along the following lines:

sealed class Try<out T> {
    fun <U> flatMap(f: (T) -> Try<U>): Try<U> {
        return when (this) {
            is Success -> f(this.value)
            is Failure -> this as Failure<U>
        }
    }
}

The "map" and "flatMap" methods are the power tools of this type, allowing chaining of complex operations together focusing on the happy path.


Conclusion

Try is a powerful type allowing a functional handling of exceptions in the code. I have a strawman implementation using Kotlin available in my github repo here - https://github.com/bijukunjummen/kfun

Kotlin - Tuple type

It is very simple to write a Tuple type with the expressiveness of Kotlin. My objective expressed in tests is the following:

1. Be able to define a Tuple of up to 5 elements and be able to retrieve the elements using an index-like placeholder, expressed in a test with 2 elements like this:

val tup = Tuple("elem1", "elem2")
assertThat(tup._1).isEqualTo("elem1")
assertThat(tup._2).isEqualTo("elem2")

2. Be able to de-construct the constituent types along the following lines:
val tup = Tuple("elem1", "elem2")
val (e1, e2) = tup

assertThat(e1).isEqualTo("elem1")
assertThat(e2).isEqualTo("elem2")


Implementation

The implementation for a tuple of 2 elements is the following in its entirety:

data class Tuple2<out A, out B>(val _1: A, val _2: B)

A Kotlin data class provides all the underlying support of being able to retrieve the individual fields and the ability to destructure using an expression like this:

val (e1, e2) = Tuple2("elem1", "elem2")

All that I need to do at this point is provide a helper that creates a tuple of appropriate size based on the number of arguments provided which I have defined as follows:

object Tuple {
    operator fun <A> invoke(_1: A): Tuple1<A> = Tuple1(_1)
    operator fun <A, B> invoke(_1: A, _2: B): Tuple2<A, B> = Tuple2(_1, _2)
    operator fun <A, B, C> invoke(_1: A, _2: B, _3: C): Tuple3<A, B, C> = Tuple3(_1, _2, _3)
    operator fun <A, B, C, D> invoke(_1: A, _2: B, _3: C, _4: D): Tuple4<A, B, C, D> = Tuple4(_1, _2, _3, _4)
    operator fun <A, B, C, D, E> invoke(_1: A, _2: B, _3: C, _4: D, _5: E): Tuple5<A, B, C, D, E> = Tuple5(_1, _2, _3, _4, _5)
}

which allows me to define tuples of different sizes using a construct which looks like this:

val tup2 = Tuple("elem1", "elem2")
val tup3 = Tuple("elem1", "elem2", "elem3")
val tup4 = Tuple("elem1", "elem2", "elem3", "elem4")

One more twist: typically, a Pair type is an alias for a Tuple of 2 elements and Triple is an alias for a Tuple of 3 elements; these can be trivially defined in Kotlin the following way:

typealias Pair<A, B> = Tuple2<A, B>
typealias Triple<A, B, C> = Tuple3<A, B, C>
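
A brief usage sketch (assuming the Tuple helper and data classes above are in scope) showing destructuring and positional access:

fun main() {
    val (name, value) = Tuple("answer", 42)   // destructuring a Tuple2
    println("$name = $value")                 // answer = 42

    val tup3 = Tuple("x", 1, true)
    println(tup3._3)                          // true
}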

Simple indeed! A more filled-in sample is available in my github repo here - https://github.com/bijukunjummen/kfun

Spring Based Application - Migrating to Junit 5

This is a quick write-up on migrating a Gradle based Spring Boot app from JUnit 4 to the shiny new JUnit 5. JUnit 4 tests continue to work thanks to the JUnit 5 Test Engine abstraction, which provides support for tests written in different programming models; in this instance, JUnit 5 provides a Vintage Test Engine with the ability to run JUnit 4 tests.


Here is a sample project with JUnit 5 integrations already in place along with sample tests in Junit 4 and Junit 5 - https://github.com/bijukunjummen/boot2-with-junit5-sample

Sample Junit 4 candidate test

As a candidate project, I have a Spring Boot 2 app with tests written in Kotlin using JUnit 4 as the testing framework. This is how a sample test looks with all dependencies explicitly called out. It uses the JUnit 4 @RunWith annotation to load up the Spring Context:

import org.assertj.core.api.Assertions.assertThat
import org.junit.Test
import org.junit.runner.RunWith
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.boot.test.autoconfigure.web.reactive.WebFluxTest
import org.springframework.test.context.junit4.SpringRunner
import org.springframework.test.web.reactive.server.WebTestClient
import java.nio.charset.StandardCharsets

@RunWith(SpringRunner::class)
@WebFluxTest(controllers = arrayOf(RouteConfig::class))
class SampleJunit4Test {

@Autowired
lateinit var webTestClient: WebTestClient

@Test
fun `get of hello URI should return Hello World!`() {
webTestClient.get()
.uri("/hello")
.exchange()
.expectStatus().isOk
.expectBody()
.consumeWith({ m ->
assertThat(String(m.responseBodyContent, StandardCharsets.UTF_8)).isEqualTo("Hello World!")
})

}

}

The JUnit 4 dependencies are pulled in transitively via the "spring-boot-starter-test" module:

testCompile('org.springframework.boot:spring-boot-starter-test')


Junit 5 migration


The first step is to pull in the JUnit 5 dependencies along with the Gradle plugin which enables running the tests:

Plugin:

buildscript {
    dependencies {
        ....
        classpath 'org.junit.platform:junit-platform-gradle-plugin:1.0.2'
    }
}
apply plugin: 'org.junit.platform.gradle.plugin'

Dependencies:

testCompile("org.junit.jupiter:junit-jupiter-api")
testRuntime("org.junit.jupiter:junit-jupiter-engine")
testRuntime("org.junit.vintage:junit-vintage-engine:4.12.2")

With these changes in place, all the JUnit 4 tests continue to run both in the IDE and when the Gradle build is executed, and at this point the tests themselves can be migrated over gradually.

The test which I had shown before looks like this with JUnit 5 Jupiter, which provides the programming model for the tests:

import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test
import org.junit.jupiter.api.extension.ExtendWith
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.boot.test.autoconfigure.web.reactive.WebFluxTest
import org.springframework.test.context.junit.jupiter.SpringExtension
import org.springframework.test.web.reactive.server.WebTestClient
import java.nio.charset.StandardCharsets

@ExtendWith(SpringExtension::class)
@WebFluxTest(controllers = arrayOf(RouteConfig::class))
class SampleJunit5Test {

@Autowired
lateinit var webTestClient: WebTestClient

@Test
fun `get of hello URI should return Hello World!`() {
webTestClient.get()
.uri("/hello")
.exchange()
.expectStatus().isOk
.expectBody()
.consumeWith({ m ->
assertEquals("Hello World!", String(m.responseBodyContent, StandardCharsets.UTF_8))
})
}

}

Note that now, instead of using the JUnit 4 @RunWith annotation, I am using the @ExtendWith annotation and providing SpringExtension as a parameter, which is responsible for loading up the Spring Context like before. The rest of the Spring annotations continue to work with JUnit 5. This way, tests can be moved over gradually from JUnit 4 to JUnit 5.


Caveats

Not everything is smooth though; there are a few issues in migrating from JUnit 4 to JUnit 5, the biggest of them likely being support for the JUnit 4 @Rule and @ClassRule annotations. The JUnit 5 documentation goes into detail on how this can be mitigated; a small example of one such migration follows.
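
As an illustration, the following is a hedged sketch (not from the original project): a test that would have used JUnit 4's ExpectedException @Rule can instead use assertThrows from the JUnit 5 Jupiter API:

import org.junit.jupiter.api.Assertions.assertThrows
import org.junit.jupiter.api.Test

class ExceptionAssertionTest {

    @Test
    fun `parsing a bad number should throw`() {
        // assertThrows replaces the JUnit 4 ExpectedException rule for this use case
        val exception = assertThrows(NumberFormatException::class.java) {
            "not-a-number".toInt()
        }
        println(exception.message)
    }
}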

Kotlin - Reified type parameters sample

This post walks through a sample that demonstrates Kotlin's ability to cleverly reify generic type parameters.

So consider first a world where Kotlin does not support this feature. If I were using the Jackson library to convert JSON to a Map with String based keys and Integer based values, I would use code along these lines:

@Test
fun `sample parameterized retrieval raw object mapper`() {
val objectMapper = ObjectMapper()
val map: Map<String, Int> = objectMapper.readValue("""
| {
| "key1": 1,
| "key2": 2,
| "key3": 3
| }
""".trimMargin(), object : TypeReference<Map<String, Int>>() {})

assertThat(map).isEqualTo(mapOf("key1" to 1, "key2" to 2, "key3" to 3))
}

The TypeReference used above implements a pattern called the super type token, which allows the type of a parameterized type to be captured by sub-classing. Note the ugly way of creating an anonymous sub-class in Kotlin:

object : TypeReference<Map<String, Int>>() {}
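
To see how the super type token trick works, here is a minimal sketch (the TypeToken name is hypothetical, not Jackson's class): subclassing a generic type records the type argument in the class metadata, where it can be read back via reflection:

import java.lang.reflect.ParameterizedType
import java.lang.reflect.Type

// The anonymous subclass "fixes" T, so the parameterized type survives erasure
abstract class TypeToken<T> {
    val type: Type = (javaClass.genericSuperclass as ParameterizedType).actualTypeArguments[0]
}

fun main() {
    val token = object : TypeToken<Map<String, Int>>() {}
    println(token.type) // prints the captured java.util.Map type with its arguments
}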


What I would like to do is to invoke the ObjectMapper the following way instead:

@Test
fun `sample parameterized retrieval`() {
val om = ObjectMapper()
val map: Map<String, Int> = om.readValue("""
| {
| "key1": 1,
| "key2": 2,
| "key3": 3
| }
""".trimMargin())

assertThat(map).isEqualTo(mapOf("key1" to 1, "key2" to 2, "key3" to 3))
}

The generic type parameter is being inferred based on the type of what is to be returned (the left-hand side).


This can be achieved using an extension function on ObjectMapper which looks like this:

inline fun <reified T> ObjectMapper.readValue(s: String): T = 
this.readValue(s, object : TypeReference<T>() {})

The inline function is at the heart of the support for reifying the generic type parameter here - after compilation, the function is expanded out into any place it is called, and thus the second version is exactly the same as the first version of the test, but reads far better than before.
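
As a separate small sketch (not from the original post) of what reification buys, the type parameter of an inline reified function can be used in places where erasure would normally get in the way, like an "is" check:

// T is available at the call site because the function body is inlined there
inline fun <reified T> isInstanceOf(value: Any): Boolean = value is T

fun main() {
    println(isInstanceOf<String>("hello"))  // true
    println(isInstanceOf<Int>("hello"))     // false
}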


Note that Jackson already implements these Kotlin extension functions in the excellent jackson-module-kotlin library.
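
For reference, a quick sketch of the same test using jackson-module-kotlin directly (this assumes the com.fasterxml.jackson.module.kotlin dependency is on the classpath):

import com.fasterxml.jackson.module.kotlin.jacksonObjectMapper
import com.fasterxml.jackson.module.kotlin.readValue

fun main() {
    val objectMapper = jacksonObjectMapper()
    // the reified readValue extension infers Map<String, Int> from the declared type
    val map: Map<String, Int> = objectMapper.readValue("""{"key1": 1, "key2": 2}""")
    println(map) // {key1=1, key2=2}
}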

Spring Boot 2 Applications and OAuth 2 - Setting up an Authorization Server

This will be a 3 post series exploring ways to enable SSO with an OAuth2 provider for Spring Boot 2 based applications. I will cover the following in these posts:

1. Ways to bootstrap an OpenID Connect compliant OAuth2 Authorization Server/OpenID Provider.
2. Legacy Spring Boot/Spring 5 approach to integrating with an OAuth2 Authorization Server/OpenID Provider.
3. Newer Spring Boot 2/Spring 5 approach to integrating with an OAuth2 Authorization Server/OpenID Provider.

This post will cover ways to bootstrap an OpenID Connect compliant OAuth2 Authorization Server running on a local machine.

The post is essentially a rehash of an earlier post which went into details of bootstrapping an OAuth2 authorization server using the excellent Cloud Foundry UAA project. There are a few changes since my previous post and I wanted to capture afresh the steps to bring up an Authorization server with a little more emphasis on changes to make it OpenID Connect compliant.

The best way to get a local version of a robust OAuth2 Authorization server running is to use the excellent Cloud Foundry UAA project.

Step 1: Clone the project:

git clone https://github.com/cloudfoundry/uaa

Step 2: Generate a keypair
UAA can make use of an asymmetric RSA keypair for signing and let clients verify the signature. I have a handy script available here which generates a keypair and a configuration file that can be used for bootstrapping UAA:



When run, this generates a UAA configuration which looks like this:

jwt:
  token:
    signing-key: |
      -----BEGIN RSA PRIVATE KEY-----
      MIIEpAIBAAKCAQEAuE5Ds...5Nka1vOTnjDgKIfsN
      NTAI25qNNCZOXXnGp71gMWsXcLFq4JDJTovL4/rzPIip/1xU0LjFSw==
      -----END RSA PRIVATE KEY-----
    verification-key: |
      -----BEGIN PUBLIC KEY-----
      MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuE5DsCmjfvWArlCIOL6n
      ZwIDAQAB
      -----END PUBLIC KEY-----

Step 3: Use the configuration to start up the UAA server:

UAA_CONFIG_URL=file://$PWD/uaa_config.yml ./gradlew run 

Step 4: Validate
A quick way to validate whether UAA has started up is to check the JWKS_URI; this is an endpoint which exposes the set of verification keys that a client can use to validate tokens. For UAA, this is available at the "/token_keys" endpoint, and it can be checked with either curl or httpie:

http GET http://localhost:8080/uaa/token_keys

# OR

curl http://localhost:8080/uaa/token_keys

If things are configured correctly, an output of the following form is expected from this endpoint:

{
"keys": [
{
"alg": "RS256",
"e": "AQAB",
"kid": "legacy-token-key",
"kty": "RSA",
"n": "APLeBV3dcUrWuVEXRyFzNaOTeKOLwFjscxbWFGofCkxrp3r0nRbBBb4ElG4qYzmbStg5o-zXAPCOu7Pqy2j4PtC3OxLHWnKsflNOEWTeXhLkPE0IptHPbc6zgVPP3EoiG_umpm0BYeJPZZc-7tA11uU_3NqidY9wnpOgKBuwNmdoyUrjb4fBDoMr_Wk2_sn_mtHSG8HaX8eJ9SbC9xRCJySjJDApOYR_dKjuwpbcM2ITfbTzD9M2J7yOtoJRkFhd1Ug2t_6AA_z47BBws-x9BBfSNbYGsVlDAbe6NK_jUE",
"use": "sig",
"value": "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA8t4FXd1xSta5URdHIXM1\no5N4o4vAWOxzFtYUah8KTGunevSdFsEFvgSUbipjOZtK2Dmj7NcA8I67s+rLaPg+\n0Lc7Esdacqx+U04RZN5eEuQ8TQim0c9tzrOBU8/cSiIb+6ambQF62glGQWF3VSDa3/oAD/PjsEHCz7H0EF9I1tgaxWUMBt7o0r+N\nQQIDAQAB\n-----END PUBLIC KEY-----"
}
]
}



Step 5: Populate Data
UAA has a companion CLI application called uaac, available here. Assuming that you have the uaac CLI downloaded and UAA started up at its default port of 8080, let us start by pointing uaac to the UAA application:

uaac target http://localhost:8080/uaa

and log into it using one of the canned client credentials (admin/adminsecret):

uaac token client get admin -s adminsecret

Now that a client has logged in, the token can be explored using:

uaac token decode

which should display the details of the logged in client:

jti: 4457847692b7464ca0320f08271a9e98
sub: admin
authorities: clients.read clients.secret clients.write uaa.admin clients.admin scim.write scim.read
scope: clients.read clients.secret clients.write uaa.admin clients.admin scim.write scim.read
client_id: admin
cid: admin
azp: admin
grant_type: client_credentials
rev_sig: 3c12911
iat: 1518332992
exp: 1518376192
iss: http://localhost:8080/uaa/oauth/token
zid: uaa

The raw JWT token can be obtained using the following command:

uaac context


with an output which looks like this:


[3]*[http://localhost:8080/uaa]
skip_ssl_validation: true

[2]*[admin]
client_id: admin
access_token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImxlZ2FjeS10b2tlbi1rZXkiLCJ0eXAiOiJKV1QifQ.eyJqdGkiOiI0NDU3ODQ3NjkyYjc0NjRjYTAzMjBmMDgyNzFhOWU5OCIsInN1YiI6ImFkbWluIiwiYXV0aG9yaXRpZXMiOlsiY2xpZW50cy5yZWFkIiwiY2xpZW50cy5zZWNyZXQiLCJjbGllbnRzLndyaXRlIiwidWFhLmFkbWluIiwiY2xpZW50cy5hZG1pbiIsInNjaW0ud3JpdGUiLCJzY2ltLnJlYWQiXSwic2NvcGUiOlsiY2xpZW50cy5yZWFkIiwiY2xpZW50cy5zZWNyZXQiLCJjbGllbnRzLndyaXRlIiwidWFhLmFkbWluIiwiY2xpZW50cy5hZG1pbiIsInNjaW0ud3JpdGUiLCJzY2ltLnJlYWQiXSwiY2xpZW50X2lkIjoiYWRtaW4iLCJjaWQiOiJhZG1pbiIsImF6cCI6ImFkbWluIiwiZ3JhbnRfdHlwZSI6ImNsaWVudF9jcmVkZW50aWFscyIsInJldl9zaWciOiIzYzEyOTExIiwiaWF0IjoxNTE4MzMyOTkyLCJleHAiOjE1MTgzNzYxOTIsImlzcyI6Imh0dHA6Ly9sb2NhbGhvc3Q6ODA4MC91YWEvb2F1dGgvdG9rZW4iLCJ6aWQiOiJ1YWEiLCJhdWQiOlsic2NpbSIsImNsaWVudHMiLCJ1YWEiLCJhZG1pbiJdfQ.ZEcUc4SvuwQYwdE0OeG5-l8Jh1HsP0JFI3aCob8A1zOcGOGjqso4j1-k_Lzm__pGZ702v4_CkoXOBXoqaaRbfVgJybBvOWbWsUZupMVMlEsyaR_j8DWY8utFAIiN2EsQgjG3qLrsf0K8lm0I3_UIEjaNZhSkWSLDLyY9wr_2SRanSf8LkcEJoSTTgDdO0aP8MvwNpDG7iQ2Om1HZEN08Bed1hHj6e1E277d9Kw7gutgCBht5GZDPFnI6Rjn0O5wimgrAa6FEDjdCpR7hy2P5RiOTcTvjj3rXtVJyVcQcxGKymZrY2WOx1mIEzEIAj8NYlw0TLuSVVOiNZ9fKlRiMpw
token_type: bearer
expires_in: 43199
scope: clients.read clients.secret clients.write uaa.admin clients.admin scim.write scim.read
jti: 4457847692b7464ca0320f08271a9e98

Finally, to add a client with credentials of client1/client1 and a user with credentials of user1/user1:

uaac client add client1 \
--name client1 \
--scope resource.read,resource.write,openid \
-s client1 \
--authorized_grant_types authorization_code,refresh_token,client_credentials,password \
--authorities uaa.resource \
--redirect_uri http://localhost:8888/**


# Add a user called user1/user1
uaac user add user1 -p user1 --emails user1@user1.com


# Add two scopes resource.read, resource.write
uaac group add resource.read
uaac group add resource.write

# Assign user1 both resource.read, resource.write scopes..
uaac member add resource.read user1
uaac member add resource.write user1


At this point, we have a working Authorization Server with a sample client and a sample user available. The subsequent posts will make use of this data to enable authentication for a sample Spring Boot 2 application. I will update the links in this post as I complete the newer posts.

Spring Boot 2 Applications and OAuth 2 - Legacy Approach

This post is the second part of a 3 post series exploring ways to enable SSO with an OAuth2 provider for Spring Boot 2 based applications. The 3 posts are:

1. Ways to bootstrap an OpenID Connect compliant OAuth2 Authorization Server/OpenID Provider
2. Legacy Spring Boot/Spring 5 approach to integrating with an OAuth2 Authorization Server/OpenID Provider - this post
3. Newer Spring Boot 2/Spring 5 approach to integrating with an OAuth2 Authorization Server/OpenID Connect Provider - coming soon


This post will explore a legacy Spring Boot 2/Spring Security 5 approach to enabling an OAuth2 based authentication mechanism for an application. It assumes that all the steps in the previous blog post have been followed and UAA is up and running.

A question that probably comes to mind is why I am talking about "legacy" in the context of Spring Boot 2/Spring Security 5, when this should have been the new way of doing SSO! The reason is that as developers we have been using an approach with Spring Boot 1.5.x that is now considered deprecated; there are features in it, however, that have not been completely ported over to the new approach (the ability to spin up an OAuth2 authorization server and the ability to create an OAuth2 resource server are examples). In the interim, the Spring Security developers (thanks Rob Winch & Joe Grandja) have provided a bridge to the legacy approach in the form of the spring-security-oauth2-boot project.

Approach

So what does the legacy approach look like? I have detailed it once before here; to recap, it works on the basis of an annotation called @EnableOAuth2Sso and a set of properties supporting this annotation. A sample security configuration looks like this:

import org.springframework.boot.autoconfigure.security.oauth2.client.EnableOAuth2Sso;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.builders.WebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@EnableOAuth2Sso
@Configuration
public class OAuth2SecurityConfig extends WebSecurityConfigurerAdapter {
@Override
public void configure(WebSecurity web) throws Exception {
super.configure(web);

web.ignoring()
.mvcMatchers("/favicon.ico", "/webjars/**", "/css/**");
}

@Override
protected void configure(HttpSecurity http) throws Exception {
http.csrf().disable();

http.authorizeRequests()
.antMatchers("/secured/**")
.authenticated()
.antMatchers("/")
.permitAll()
.anyRequest()
.authenticated();
}

}

and the set of supporting properties to point to the UAA is the following:

ssoServiceUrl: http://localhost:8080/uaa

security:
  oauth2:
    client:
      client-id: client1
      client-secret: client1
      access-token-uri: ${ssoServiceUrl}/oauth/token
      user-authorization-uri: ${ssoServiceUrl}/oauth/authorize
    resource:
      jwt:
        key-uri: ${ssoServiceUrl}/token_key
      user-info-uri: ${ssoServiceUrl}/userinfo


With the spring-security-oauth2-boot project pulled in as a dependency:

compile 'org.springframework.cloud:spring-cloud-starter-oauth2'
compile("org.springframework.security.oauth.boot:spring-security-oauth2-autoconfigure:2.0.0.BUILD-SNAPSHOT")

these annotations just work for a Spring Boot 2 application as well. Note, however, that Spring Boot 2 supports two distinct web frameworks - Spring Web and Spring WebFlux - and this approach pulls in Spring Web transitively, which forces Spring Web as the framework.

The sample in its entirety with ways to start it up is available in my github repo here - https://github.com/bijukunjummen/oauth2-boot2


Testing

Any uri starting with "/secured/**" is SSO enabled. If the index page is accessed, it is displayed without needing any authentication:



Now, clicking through to a uri starting with "/secured/**" should trigger an OAuth2 Authorization Code flow:


and should present a login screen to the user via UAA:



Logging in with the credentials that were created before (user1/user1) should redirect the user back to the legacy version of the Spring Boot 2 app and display the secured page:




This completes the legacy approach to SSO with Spring Boot 2. Note that this is just pseudo-authentication; OAuth2 is meant more for authorization to access a user's resources than for authentication the way it is used here. An article which clarifies this is available here. The next post, with native Spring Security 5/Spring Boot 2 support, will provide a cleaner authentication mechanism using OpenID Connect.

Spring Boot 2 native approach to SSO with OAuth 2/OpenID Connect

This post is the final part of a 3 post series exploring ways to enable SSO with an OAuth2 provider for Spring Boot 2 based applications. The 3 posts are:

1. Ways to bootstrap an OpenID Connect compliant OAuth2 Authorization Server/OpenID Provider
2. Legacy Spring Boot/Spring 5 approach to integrating with an OAuth2 Authorization Server/OpenID Provider
3. Newer Spring Boot 2/Spring 5 approach to integrating with an OAuth2 Authorization Server/OpenID Connect Provider - this post

This post will explore the shiny new way to enable SSO for a Spring Boot 2 application using the native OAuth2 support in Spring Security.

The post again assumes that everything described in the first post is completed.


Spring Boot 2 Auto-configuration


Spring Boot 2 provides auto-configuration for native OAuth2 support in Spring Security (see the class org.springframework.boot.autoconfigure.security.oauth2.client.OAuth2ClientAutoConfiguration).
The auto-configuration is activated by the presence of the "spring-security-oauth2-client" library, available via the following gradle coordinates:

compile "org.springframework.security:spring-security-oauth2-client"

This auto-configuration works off a set of properties; for the UAA identity provider that has been started up, the set of properties is the following:

uaa-base-url: http://localhost:8080/uaa

spring:
  security:
    oauth2:
      client:
        registration:
          uaa:
            client-id: client1
            client-secret: client1
            authorizationGrantType: authorization_code
            redirect_uri_template: "{baseUrl}/login/oauth2/code/{registrationId}"
            scope: resource.read,resource.write,openid,profile
            clientName: oauth2-sample-client
        provider:
          uaa:
            token-uri: ${uaa-base-url}/oauth/token
            authorization-uri: ${uaa-base-url}/oauth/authorize
            user-info-uri: ${uaa-base-url}/userinfo
            jwk-set-uri: ${uaa-base-url}/token_keys
            userNameAttribute: user_name

If I were to depend on the Spring Boot 2 auto-configuration for native OAuth2 support to do its magic and start the application up, I would be presented with this page on accessing the application:


Note that this login page is a default page created by Spring Security OAuth2 and by default presents the list of registrations.

Clicking on "oauth2-sample-client" presents the login page of the Identity provider, UAA in this instance:

For an OpenID Connect based flow, applications are issued an ID Token along with an Access Token, which I am decoding and presenting on a page:
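
The decoding itself is nothing exotic; the following is a rough Kotlin sketch (hypothetical, not the post's code) of how the claims section of a JWT can be peeked at - a JWT is three base64url encoded segments, and the middle one is the JSON claims payload:

import java.util.Base64

// Decode the middle (claims) segment of a JWT - no signature verification is done here
fun decodeJwtPayload(token: String): String {
    val payload = token.split(".")[1]
    return String(Base64.getUrlDecoder().decode(payload))
}

fun main() {
    // a structurally valid, made-up, unsigned token purely for illustration
    val fakeToken = listOf(
        """{"alg":"none"}""",
        """{"sub":"user1","iss":"http://localhost:8080/uaa/oauth/token"}""",
        ""
    ).joinToString(".") { Base64.getUrlEncoder().withoutPadding().encodeToString(it.toByteArray()) }

    println(decodeJwtPayload(fakeToken))
}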



Customizations

One of the quick customizations that I want to make is to redirect to UAA on access of any secured page, specified via a "/secured" uri pattern. The following configuration should enable this:

package sample.oauth2.config

import org.springframework.context.annotation.Configuration
import org.springframework.security.config.annotation.web.builders.HttpSecurity
import org.springframework.security.config.annotation.web.builders.WebSecurity
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter

@Configuration
class OAuth2SecurityConfig : WebSecurityConfigurerAdapter() {
override fun configure(web: WebSecurity) {
super.configure(web)
web.ignoring()
.mvcMatchers(
"/favicon.ico",
"/webjars/**",
"/css/**"
)
}

override fun configure(http: HttpSecurity) {
http.csrf().disable()

http.authorizeRequests()
.antMatchers("/secured/**")
.authenticated()
.antMatchers("/", "/custom_login")
.permitAll()
.anyRequest()
.authenticated()
.and()
.oauth2Login()
.loginPage("/custom_login")
}
}

See the "/custom_login" being set as the URI above, which in turn simply hands over control to OAuth2 controlled endpoints which know to set the appropriate parameters and redirect to UAA:

@Controller
class LoginController {

    @RequestMapping("/custom_login")
    fun loginPage(): String {
        return "redirect:/oauth2/authorization/uaa"
    }
}



This concludes the exploration of native OAuth2 support in Spring Boot 2 applications.

All of the samples are available in my github repo - https://github.com/bijukunjummen/oauth2-boot2


The following references were helpful in understanding the OAuth2 support:

1. Spring Security Documentation - https://docs.spring.io/spring-security/site/docs/current/reference/html/
2. Joe Grandja's Spring One Platform 2017 presentation - https://www.youtube.com/watch?v=WhrOCurxFWU

Kotlin and JUnit 5 @BeforeAll


Introduction

In Kotlin, classes do not have static methods. A Java-equivalent semantic can be provided using the concept of a companion object though. This post will go into the details of what it takes to support the JUnit 5 @BeforeAll and @AfterAll annotations, which depend on the presence of static methods in test classes.


BeforeAll and AfterAll in Java

JUnit 5 @BeforeAll annotated methods are executed before all tests and @AfterAll annotated methods are executed after all tests. These annotations are expected to be applied to static methods:

import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Junit5BeforeAllTest {

private static final Logger LOGGER = LoggerFactory.getLogger(Junit5BeforeAllTest.class);

@BeforeAll
static void beforeAll() {
LOGGER.info("beforeAll called");
}

@Test
public void aTest1() {
LOGGER.info("aTest1 called");
LOGGER.info(this.toString());
}

@Test
public void aTest2() {
LOGGER.info("aTest2 called");
LOGGER.info(this.toString());
}

@AfterAll
static void afterAll() {
LOGGER.info("afterAll called");
}
}

A rough flow is: the JUnit platform calls the "@BeforeAll" annotated methods, then for each test it creates an instance of the test class and invokes the test. After all tests are executed, the "@AfterAll" annotated static methods are called. This is borne out by the logs - see how the instance ids (from the toString() of Object) are different:

2018-03-28 17:22:03.618  INFO   --- [           main] c.p.cookbook.Junit5BeforeAllTest         : beforeAll called
2018-03-28 17:22:03.652 INFO --- [ main] c.p.cookbook.Junit5BeforeAllTest : aTest1 called
2018-03-28 17:22:03.653 INFO --- [ main] c.p.cookbook.Junit5BeforeAllTest : com.pivotalservices.cookbook.Junit5BeforeAllTest@7bc1a03d
2018-03-28 17:22:03.663 INFO --- [ main] c.p.cookbook.Junit5BeforeAllTest : aTest2 called
2018-03-28 17:22:03.664 INFO --- [ main] c.p.cookbook.Junit5BeforeAllTest : com.pivotalservices.cookbook.Junit5BeforeAllTest@6591f517
2018-03-28 17:22:03.669 INFO --- [ main] c.p.cookbook.Junit5BeforeAllTest : afterAll called



This default lifecycle of a JUnit 5 test can be changed by an annotation though if the test class is annotated the following way:

@TestInstance(TestInstance.Lifecycle.PER_CLASS)
public class Junit5BeforeAllTest {
....
}

The advantage now is that the @BeforeAll and @AfterAll annotations can be placed on non-static methods, as the JUnit 5 platform can guarantee that these methods are called exactly once before/after ALL tests. The catch though is that any instance-level state will not be reset before each test.
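
A small sketch in Kotlin (not from the original post) of that caveat - with a single shared test instance, mutable fields carry over between test methods:

import org.junit.jupiter.api.Test
import org.junit.jupiter.api.TestInstance

@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class SharedStateTest {
    // shared across test methods because there is only one instance of this class
    private var counter = 0

    @Test
    fun test1() {
        counter++
        println("counter seen by test1: $counter")
    }

    @Test
    fun test2() {
        counter++
        // with PER_CLASS one of the two tests observes 2;
        // with the default per-method lifecycle both would observe 1
        println("counter seen by test2: $counter")
    }
}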

BeforeAll and AfterAll in Kotlin

So how does this translate to Kotlin?
For the default case of a new test instance per test, an equivalent Kotlin test looks like this:

import org.junit.jupiter.api.AfterAll
import org.junit.jupiter.api.BeforeAll
import org.junit.jupiter.api.Test
import org.slf4j.LoggerFactory

class Junit5BeforeAllKotlinTest {

@Test
fun aTest1() {
LOGGER.info("aTest1 called")
LOGGER.info(this.toString())
}

@Test
fun aTest2() {
LOGGER.info("aTest2 called")
LOGGER.info(this.toString())
}

companion object {
private val LOGGER = LoggerFactory.getLogger(Junit5BeforeAllTest::class.java)


@BeforeAll
@JvmStatic
internal fun beforeAll() {
LOGGER.info("beforeAll called")
}

@AfterAll
@JvmStatic
internal fun afterAll() {
LOGGER.info("afterAll called")
}
}
}

A Kotlin companion object with methods annotated with @JvmStatic does the job.

Simpler is the case where the lifecycle is modified:

import org.junit.jupiter.api.AfterAll
import org.junit.jupiter.api.BeforeAll
import org.junit.jupiter.api.Test
import org.junit.jupiter.api.TestInstance
import org.slf4j.LoggerFactory

@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class Junit5BeforeAllKotlinTest {

private val LOGGER = LoggerFactory.getLogger(Junit5BeforeAllTest::class.java)

@BeforeAll
internal fun beforeAll() {
LOGGER.info("beforeAll called")
}

@Test
fun aTest1() {
LOGGER.info("aTest1 called")
LOGGER.info(this.toString())
}

@Test
fun aTest2() {
LOGGER.info("aTest2 called")
LOGGER.info(this.toString())
}


@AfterAll
internal fun afterAll() {
LOGGER.info("afterAll called")
}
}


My personal preference is for the companion object approach, as I like the idea of a deterministic state of the test instance before each test method is executed. Another advantage of the approach is with Spring Boot based tests, where you want Spring to act on the test instance (inject dependencies, resolve properties etc.) only after the @BeforeAll annotated method is called. To make this more concrete, consider the following example:

import org.assertj.core.api.Assertions.assertThat
import org.junit.jupiter.api.AfterAll
import org.junit.jupiter.api.BeforeAll
import org.junit.jupiter.api.Test
import org.junit.jupiter.api.extension.ExtendWith
import org.springframework.beans.factory.annotation.Value
import org.springframework.boot.test.context.SpringBootTest
import org.springframework.context.annotation.Configuration
import org.springframework.test.context.junit.jupiter.SpringExtension


@ExtendWith(SpringExtension::class)
@SpringBootTest
class BeforeAllSampleTest {

@Value("\${some.key}")
private lateinit var someKey: String


companion object {
@BeforeAll
@JvmStatic
fun beforeClass() {
System.setProperty("some.key", "some-value")
}

@AfterAll
@JvmStatic
fun afterClass() {
System.clearProperty("some.key")
}
}

@Test
fun testValidateProperties() {
assertThat(someKey).isEqualTo("some-value")
}

@Configuration
class SpringConfig
}

This kind of test will not work at all if the lifecycle were changed to "@TestInstance(TestInstance.Lifecycle.PER_CLASS)".

Correction
Per comments from the one and only Sébastien Deleuze, the previous test can be simplified by injecting dependencies and properties via the constructor, so the test can be re-written as:

@ExtendWith(SpringExtension::class)
@SpringBootTest
class BeforeAllSampleTest(@Value("\${some.key}") val someKey: String) {

companion object {
@BeforeAll
@JvmStatic
fun beforeClass() {
System.setProperty("some.key", "some-value")
}

@AfterAll
@JvmStatic
fun afterClass() {
System.clearProperty("some.key")
}
}

@Test
fun testValidateProperties() {
assertThat(someKey).isEqualTo("some-value")
}

@Configuration
class SpringConfig
}

Reference

This stackoverflow answer was instrumental in my understanding of the nuances of JUnit 5 with Kotlin.

Spring Cloud Gateway - Configuring a simple route

Spring Cloud Gateway can be considered a successor to the Spring Cloud Netflix Zuul project and helps in implementing a Gateway pattern in a microservices environment. It is built on top of Spring Boot 2 and Spring Webflux and is non-blocking end to end - it exposes a Netty based server, uses a Netty based client to make the downstream microservice calls and uses reactor-core for the rest of the flow.


My objective here is to show how a small Spring Cloud Netflix Zuul based route can be translated in multiple ways using Spring Cloud Gateway.

Spring Cloud Netflix Zuul

Spring Cloud Zuul allows simple routing rules to be configured using property files expressed as a yaml here:

zuul:
  routes:
    sample:
      path: /zuul/**
      url: http://httpbin.org:80
      strip-prefix: true


This route would expose an endpoint in Zuul which intercepts any requests made to uris with a prefix of "/zuul" and forwards them to the downstream system after stripping out the "zuul" prefix.

Spring Cloud Gateway

Spring Cloud Gateway allows equivalent functionality to be coded in three ways - using a Java based DSL, using a Kotlin based DSL, or using simple property based configuration.

A starter project can be generated using the excellent http://start.spring.io site:



Java Based DSL

A Java based dsl that creates a route similar to the Zuul route is the following:

import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayRoutes {

    @Bean
    public RouteLocator routeLocator(RouteLocatorBuilder builder) {
        return builder.routes()
                .route(r ->
                        r.path("/java/**")
                                .filters(
                                        f -> f.stripPrefix(1)
                                )
                                .uri("http://httpbin.org:80")
                )
                .build();
    }

}

This is a readable DSL that configures a route which intercepts uris with a prefix of "java" and sends them to a downstream system after stripping out this prefix.

Kotlin Based DSL


A Kotlin based DSL to configure this route looks like this.

import org.springframework.cloud.gateway.route.RouteLocator
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder
import org.springframework.cloud.gateway.route.builder.filters
import org.springframework.cloud.gateway.route.builder.routes
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration

@Configuration
class KotlinRoutes {

    @Bean
    fun kotlinBasedRoutes(routeLocatorBuilder: RouteLocatorBuilder): RouteLocator =
        routeLocatorBuilder.routes {
            route {
                path("/kotlin/**")
                filters { stripPrefix(1) }
                uri("http://httpbin.org")
            }
        }
}

I had originally submitted the PR for Kotlin based DSL for Spring Cloud Gateway routes and so have a bias towards using Kotlin for configuring Spring Cloud Gateway :-). The route takes in urls with a prefix of "kotlin" and strips it out before making the downstream microservice call.

Property based Route

And finally the property based one:

spring:
  cloud:
    gateway:
      routes:
        - predicates:
            - Path=/props/**
          filters:
            - StripPrefix=1
          uri: "http://httpbin.org"

This route like the Java and Kotlin version takes in a url with a prefix of "props" and strips this prefix out before making the downstream call. The properties based version has the added advantage of being refreshable at runtime.

Conclusion

This is a very quick intro to Spring Cloud Gateway by comparing how a typical configuration from Spring Cloud Netflix Zuul maps to Spring Cloud Gateway.

TestContainers and Spring Boot

TestContainers is just awesome! It provides a very convenient way to start up and CLEANLY tear down docker containers in JUnit tests. This feature is very useful for integration testing of applications against real databases and any other resource for which a docker image is available.

My objective is to demonstrate a sample test for a JPA based Spring Boot application using TestContainers. The sample is based on an example at the TestContainers github repo.

Sample App


The Spring Boot based application is straightforward - It is a Spring Data JPA based application with the web layer written using Spring Web Flux. The entire sample is available at my github repo and it may be easier to just follow the code directly there.

The City entity being persisted looks like this (using Kotlin):

import javax.persistence.Entity
import javax.persistence.GeneratedValue
import javax.persistence.Id

@Entity
data class City(
    @Id @GeneratedValue var id: Long? = null,
    val name: String,
    val country: String,
    val pop: Long
) {
    constructor() : this(id = null, name = "", country = "", pop = 0L)
}

All that is needed to provide a repository to manage this entity is the following interface, thanks to the excellent Spring Data JPA project:

import org.springframework.data.jpa.repository.JpaRepository
import samples.geo.domain.City

interface CityRepo: JpaRepository<City, Long>
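
As a hedged aside (not part of the original sample), Spring Data JPA can also derive queries from method names, so the same interface could be extended along these lines:

import org.springframework.data.jpa.repository.JpaRepository
import samples.geo.domain.City

interface CityRepo : JpaRepository<City, Long> {
    // query derived from the method name: select ... where country = ?
    fun findByCountry(country: String): List<City>

    // query derived from the method name: select ... where pop > ?
    fun findByPopGreaterThan(pop: Long): List<City>
}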


I will not cover the web layer here as it is not relevant to the discussion.


Testing the Repository

Spring Boot provides a feature called the Slice tests which is a neat way to test different horizontal slices of the application. A test for the CityRepo repository looks like this:


import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.test.context.junit4.SpringRunner;
import samples.geo.domain.City;
import samples.geo.repo.CityRepo;

import static org.assertj.core.api.Assertions.assertThat;

@RunWith(SpringRunner.class)
@DataJpaTest
public class CitiesWithEmbeddedDbTest {

@Autowired
private CityRepo cityRepo;

@Test
public void testWithDb() {
City city1 = cityRepo.save(new City(null, "city1", "USA", 20000L));
City city2 = cityRepo.save(new City(null, "city2", "USA", 40000L));

assertThat(city1)
.matches(c -> c.getId() != null && "city1".equals(c.getName()) && c.getPop() == 20000L);

assertThat(city2)
.matches(c -> c.getId() != null && "city2".equals(c.getName()) && c.getPop() == 40000L);

assertThat(cityRepo.findAll()).containsExactly(city1, city2);
}

}

The "@DataJpaTest" annotation starts up an embedded h2 databases, configures JPA and loads up any Spring Data JPA repositories(CityRepo in this instance).

This kind of test works well, considering that JPA provides the database abstraction, and if JPA is used correctly the code should be portable across any supported database. However, assuming that this application is expected to run against PostgreSQL in production, ideally there would be some level of integration testing done against that database, which is where TestContainers fits in - it provides a way to boot up PostgreSQL as a docker container.

TestContainers

The same repository test using TestContainers looks like this:

import org.junit.ClassRule;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.boot.test.util.TestPropertyValues;
import org.springframework.context.ApplicationContextInitializer;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;
import org.testcontainers.containers.PostgreSQLContainer;
import samples.geo.domain.City;
import samples.geo.repo.CityRepo;

import java.time.Duration;

import static org.assertj.core.api.Assertions.assertThat;

@RunWith(SpringRunner.class)
@DataJpaTest
@ContextConfiguration(initializers = {CitiesWithPostgresContainerTest.Initializer.class})
public class CitiesWithPostgresContainerTest {

@ClassRule
public static PostgreSQLContainer postgreSQLContainer =
(PostgreSQLContainer) new PostgreSQLContainer("postgres:10.4")
.withDatabaseName("sampledb")
.withUsername("sampleuser")
.withPassword("samplepwd")
.withStartupTimeout(Duration.ofSeconds(600));

@Autowired
private CityRepo cityRepo;

@Test
public void testWithDb() {
City city1 = cityRepo.save(new City(null, "city1", "USA", 20000L));
City city2 = cityRepo.save(new City(null, "city2", "USA", 40000L));

assertThat(city1)
.matches(c -> c.getId() != null && "city1".equals(c.getName()) && c.getPop() == 20000L);

assertThat(city2)
.matches(c -> c.getId() != null && "city2".equals(c.getName()) && c.getPop() == 40000L);

assertThat(cityRepo.findAll()).containsExactly(city1, city2);
}

static class Initializer
implements ApplicationContextInitializer<ConfigurableApplicationContext> {
public void initialize(ConfigurableApplicationContext configurableApplicationContext) {
TestPropertyValues.of(
"spring.datasource.url=" + postgreSQLContainer.getJdbcUrl(),
"spring.datasource.username=" + postgreSQLContainer.getUsername(),
"spring.datasource.password=" + postgreSQLContainer.getPassword()
).applyTo(configurableApplicationContext.getEnvironment());
}
}
}

The core of the code looks the same as the previous test, but the repository is now being tested against a real PostgreSQL database. To go into a little more detail -

A PostgreSQL container is being started up using a JUnit Class Rule which gets triggered before any of the tests are run. This dependency is being pulled in using a gradle dependency of the following type:

    testCompile("org.testcontainers:postgresql:1.7.3")

The class rule starts up a PostgreSQL docker container (postgres:10.4) and configures a database and credentials for the database. Now, from Spring Boot's perspective, these details need to be passed on to the application as properties BEFORE Spring starts creating a test context for the test to run in; this is done for the test using an ApplicationContextInitializer, which is invoked by Spring very early in the lifecycle of a Spring Context.

The custom ApplicationContextInitializer, which sets the datasource url and user credentials, is hooked up to the test using this code:

...
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;
...

@RunWith(SpringRunner.class)
@DataJpaTest
@ContextConfiguration(initializers = {CitiesWithPostgresContainerTest.Initializer.class})
public class CitiesWithPostgresContainerTest {
...

With this boilerplate set up in place, TestContainers and the Spring Boot slice test take over the running of the test. More importantly, TestContainers also takes care of the tear down - the JUnit Class Rule ensures that once the test is complete, the containers are stopped and removed.

Conclusion

This was a whirlwind tour of TestContainers. There is far more to TestContainers than what I have covered here, but I hope this provides a taste of what is feasible using this excellent library and how to configure it with Spring Boot. The sample is available at my github repo.

Zuul 2 - Sample filter

Zuul 2 has finally been open sourced. I first heard of Zuul 2 during a Spring One 2016 talk by Mikey Cohen (available here); it is good to finally be able to play with it.

To quickly touch on the purpose of a gateway like Zuul 2 - gateways provide an entry point to an ecosystem of microservices. Since all customer requests are routed through the gateway, it can control aspects of routing and of the requests and responses flowing through it -

  • Routing based on different criteria - uri patterns, headers etc.
  • Monitoring service health
  • Loadbalancing and throttling requests to origin servers
  • Security
  • Canary testing


My objective in this post is simple - to write a Zuul2 filter that can remove a path prefix and send a request to a downstream service and back.

Zuul 2 filters are the mechanism by which Zuul is customized. Say a client sends a request to the /passthrough/someapi uri; I then want the Zuul 2 layer to forward the request to a downstream service using the /someapi uri. Zuul 2 filters are typically packaged up as groovy files and are dynamically loaded (and potentially refreshed) and applied. My sample here will be a little different though - my filters are coded in Java and I had to bypass the loading mechanism built into Zuul.

It may be easier simply to follow the code, which is available in my github repository here - https://github.com/bijukunjummen/boot2-load-demo/tree/master/applications/zuul2-sample, it is packaged in with a set of samples which provide a similar functionality. The code is based on the Zuul 2 samples available here.



This is how my filter looks:

import com.netflix.zuul.context.SessionContext;
import com.netflix.zuul.filters.http.HttpInboundSyncFilter;
import com.netflix.zuul.message.http.HttpRequestMessage;

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StripPrefixFilter extends HttpInboundSyncFilter {
private final List<String> prefixPatterns;

public StripPrefixFilter(List<String> prefixPatterns) {
this.prefixPatterns = prefixPatterns;
}

@Override
public HttpRequestMessage apply(HttpRequestMessage input) {
SessionContext context = input.getContext();
String path = input.getPath();
String[] parts = path.split("/");
if (parts.length > 0) {
String targetPath = Arrays.stream(parts)
.skip(1).collect(Collectors.joining("/"));
context.set("overrideURI", targetPath);
}
return input;
}

@Override
public int filterOrder() {
return 501;
}

@Override
public boolean shouldFilter(HttpRequestMessage msg) {
for (String target: prefixPatterns) {
if (msg.getPath().matches(target)) {
return true;
}
}
return false;
}
}


It extends "HttpInboundSyncFilter", these are filters which handle the request inbound to origin servers. As you can imagine there is a "HttpOutboundSyncFilter" which intercept calls outbound from the origin servers. There is a "HttpInboundFilter" and "HttpOutboundFilter" counterpart to these "sync" filters, they return RxJavaObservable type.

There is a magic string "overrideURI" in my filter implementation. If you are curious about how I found that to be the override uri, it is by scanning through the Zuul 2 codebase. There are likely a lot of filters used internally at Netflix which haven't been released for general consumption yet.

With this filter in place, I have bypassed the dynamic groovy scripts loading feature of Zuul2 by explicitly registering my custom filter using this component:

import com.netflix.zuul.filters.FilterRegistry;
import com.netflix.zuul.filters.ZuulFilter;

import javax.annotation.PostConstruct;
import javax.inject.Inject;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class FiltersRegisteringService {

private final List<ZuulFilter> filters;
private final FilterRegistry filterRegistry;

@Inject
public FiltersRegisteringService(FilterRegistry filterRegistry, Set<ZuulFilter> filters) {
this.filters = new ArrayList<>(filters);
this.filterRegistry = filterRegistry;
}

public List<ZuulFilter> getFilters() {
return filters;
}

@PostConstruct
public void initialize() {
for (ZuulFilter filter: filters) {
this.filterRegistry.put(filter.filterName(), filter);
}
}
}

I had to make a few more minor tweaks to get this entire set-up with my custom filter bootstrapped, these can be followed in the github repo


Once the Zuul 2 sample with this custom filter is started up, the behavior is that any request to /passthrough/messages is routed to a downstream system after the "/passthrough" prefix is stripped out. The instructions to start up the Zuul 2 app are part of the README of the repo.

This concludes a quick intro to writing a custom Zuul2 filter, I hope this gives just enough of a feel to evaluate Zuul 2.

Tracing a reactive flow - Using Spring Cloud Sleuth with Boot 2

Spring Cloud Sleuth, which adds Spring instrumentation support on top of OpenZipkin Brave, makes distributed tracing trivially simple for Spring Boot applications. This is a quick write-up on what it takes to add distributed tracing support using this excellent library.

Consider two applications - a client application which uses an upstream service application, both using Spring WebFlux, the reactive web stack for Spring:


My objective is to ensure that flows from user to the client application to the service application can be traced and latencies cleanly recorded for requests.


The final topology that Spring Cloud Sleuth enables is the following:


The sampled trace information from the client and the service app is exported to Zipkin via a queuing mechanism like RabbitMQ.


So what are the changes required to the client and the service app - like I said it is trivially simple! The following libraries need to be pulled in - in my case via gradle:

compile("org.springframework.cloud:spring-cloud-starter-sleuth")
compile("org.springframework.cloud:spring-cloud-starter-zipkin")
compile("org.springframework.amqp:spring-rabbit")

The versions are not specified as they are expected to be pulled in via the Spring Cloud BOM, thanks to the Spring dependency management Gradle plugin:


ext {
    springCloudVersion = 'Finchley.RELEASE'
}

apply plugin: 'io.spring.dependency-management'

dependencyManagement {
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:${springCloudVersion}"
    }
}

And that is actually it - any logs from the application should now start recording the trace and the spans; see how the traceid is carried forward in the following logs spanning two different services:

2018-06-22 04:06:28.579  INFO [sample-client-app,c3d507df405b8aaf,c3d507df405b8aaf,true] 9 --- [server-epoll-13] sample.load.PassThroughHandler           : handling message: Message(id=null, payload=Test, delay=1000)
2018-06-22 04:06:28.586 INFO [sample-service-app,c3d507df405b8aaf,829fde759da15e63,true] 8 --- [server-epoll-11] sample.load.MessageHandler : Handling message: Message(id=5e7ba240-f97d-405a-9633-5540bbfe0df1, payload=Test, delay=1000)

Further the Zipkin UI records the exported information and can visually show a sample trace the following way:



This sample is available in my github repository here - https://github.com/bijukunjummen/sleuth-webflux-sample and can be started up easily using docker-compose with all the dependencies wired in.

Jib - Building docker image for a Spring Boot App

I was pleasantly surprised by how easy it was to create a docker image for a sample Spring Boot application using Jib.

Let me first contrast Jib with an approach that I was using before.

I was creating docker images using bmuschko's excellent gradle-docker plugin. Given access to a docker daemon and a gradle dsl based description of the Dockerfile or a straight Dockerfile, it would create the docker image using a gradle task. In my case, the task to create the docker image looks something like this:

task createDockerImage(type: DockerBuildImage) {
    inputDir = file('.')
    dockerFile = project.file('docker/Dockerfile')
    tags = ['sample-micrometer-app:' + project.version]
}

createDockerImage.dependsOn build

and my Dockerfile itself is derived off the "java:8" base image:

FROM java:8
...

The gradle-docker plugin made it simple to create a docker image right from gradle, with the catch that the plugin needs access to a docker daemon to create the image. Also, since the base "java:8" image is large, the final docker image turns out to be around 705MB on my machine - again no fault of the gradle-docker plugin, but a consequence of my choice of base image.


Now with Jib, all I have to do is to add the plugin:

plugins {
    id 'com.google.cloud.tools.jib' version '0.9.6'
}

Configure it to give the image a name:

jib {
    to {
        image = "sample-micrometer-app:0.0.1-SNAPSHOT"
    }
}

And that is it. With a local docker daemon available, I can create my docker image using the following task:


./gradlew jibDockerBuild

Jib automatically selects a very lightweight base image - my new image is just about 150 MB in size.

If a docker registry is available, then the local docker daemon is not required - Jib can directly create and publish the image to the registry!
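
If I remember the plugin correctly, that flow uses the "jib" task ("./gradlew jib") rather than "jibDockerBuild", building and pushing the image without needing a local daemon.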

The Jib gradle plugin provides an interesting task - "jibExportDockerContext" - to export the Dockerfile; this way, if needed, a docker build can be run using this Dockerfile. For my purposes I wanted to see the contents of this file, and it looks something like this:

FROM gcr.io/distroless/java

COPY libs /app/libs/
COPY resources /app/resources/
COPY classes /app/classes/

ENTRYPOINT ["java","-cp","/app/libs/*:/app/resources/:/app/classes/","sample.meter.SampleServiceAppKt"]


All in all, a very smooth experience and Jib does live up to its goals. My sample project with jib integrated with a gradle build is available here.


"Knative Serving" for Spring Boot Applications

I got a chance to try Knative's Serving feature to deploy a Spring Boot application, and this post simply documents a sample and the approach I took.

I don't understand the internals of Knative enough yet to have an opinion on whether this approach is better than the deployment + services + ingress based approach.

One awesome feature is the auto-scaling in Knative Serving, which, based on the load, increases/decreases the number of pods of the "Deployment" handling the requests.

Details of the Sample


My entire sample is available here and it is mostly developed based on the java sample available with the Knative Serving documentation. I used Knative with a minikube environment to try the sample.
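
For reference, the sample application exposes a "/messages" endpoint which takes a message with an id, a payload and a delay, and responds after the delay. The following is only a rough sketch of such a handler - an assumption about the shape, not the exact code from the repo:

import java.time.Duration
import org.springframework.web.bind.annotation.PostMapping
import org.springframework.web.bind.annotation.RequestBody
import org.springframework.web.bind.annotation.RestController
import reactor.core.publisher.Mono

// Rough sketch - field names based on the payloads used later in this post
data class Message(val id: String? = null, val payload: String? = null, val delay: Long = 0)

@RestController
class MessageController {
    // Echo the message back after the requested delay
    @PostMapping("/messages")
    fun handle(@RequestBody message: Message): Mono<Message> =
        Mono.just(message).delayElement(Duration.ofMillis(message.delay))
}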


Deploying to Kubernetes/Knative

Assuming that a Kubernetes environment with Istio and Knative has been set-up, the way to run the application is to deploy a Kubernetes manifest this way:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: sample-boot-knative-service
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: bijukunjummen/sample-boot-knative-app:0.0.1-SNAPSHOT

The image "bijukunjummen/sample-boot-knative-app:0.0.1-SNAPSHOT" is publicly available via Dockerhub, so this sample should work out of the box.

Applying this manifest:

kubectl apply -f service.yml

should register a Knative Serving Service resource with Kubernetes. The Knative serving Service resource manages the lifecycle of other Knative resources (configuration, revision, route), the details of which can be viewed using the following command; if anything goes wrong, the details should show up in the output:

kubectl get services.serving.knative.dev sample-boot-knative-service -o yaml

Testing

Assuming that the Knative serving service is deployed cleanly, the first oddity to see is that no pods show up for the application!


If I were to make a request to the app now, it would go via a routing layer managed by Knative - the details needed to reach this routing layer can be retrieved for a minikube environment using the following bash script:

export GATEWAY_URL=$(echo $(minikube ip):$(kubectl get svc knative-ingressgateway -n istio-system -o 'jsonpath={.spec.ports[?(@.port==80)].nodePort}'))
export APP_DOMAIN=$(kubectl get services.serving.knative.dev sample-boot-knative-service -o="jsonpath={.status.domain}")

and making a call to an endpoint of the app using curl:

curl -X "POST""http://${GATEWAY_URL}/messages" \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Host: ${APP_DOMAIN}" \
-d $'{
"id": "1",
"payload": "one",
"delay": "300"
}'
or using httpie:

http http://${GATEWAY_URL}/messages Host:"${APP_DOMAIN}" id=1 payload=test delay=100

should magically, using the auto-scaler component, start spinning up the pods to handle the request:


The first request took almost 17 seconds to complete, the time it takes to spin up a pod, but subsequent requests are quick.

Now, to show the real power of the auto-scaler, I ran a small load test with a 50-user load, and the pods were scaled up and down as required.



Conclusion

I can see the promise of Knative in automatically managing resources in a Kubernetes environment, once they are defined using a fairly simple manifest, and letting a developer focus on the code and logic.

Knative Serving - Service to Service call

In a previous post I had covered using Knative's Serving feature to run a sample Java Application. This post will go into the steps to deploy two applications, with one application calling the other.





Details of the Sample

The entire sample is available at my github repository - https://github.com/bijukunjummen/sleuth-webflux-sample.

The applications are Spring Boot based. The backend application exposes an endpoint "/messages" which, when invoked with a payload that looks like this:

{
  "delay": "0",
  "id": "123",
  "payload": "test",
  "throw_exception": "true"
}

would respond after the specified delay. If the payload has the "throw_exception" flag set to true, then it would return a 5XX after the specified delay.

The client application exposes a "/passthrough/messages" endpoint, which takes in the exact same payload and simply forwards it to the backend application. The url to the backend app is passed to the client app as a "LOAD_TARGET_URL" environment property.



Deploying as a Knative Serving service

The "knative" subfolder of this project holds the manifests for deploying the Knative serving Service for the two applications. The backend application's knative service manifest looks like this:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: sample-backend-app
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: bijukunjummen/sample-backend-app:0.0.1-SNAPSHOT
            env:
            - name: VERSION
              value: "0.0.2-SNAPSHOT"
            - name: SERVER_PORT
              value: "8080"

The client app has to point to the backend service, and this is specified in its spec:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: sample-client-app
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: bijukunjummen/sample-client-app:0.0.2-SNAPSHOT
            env:
            - name: VERSION
              value: "0.0.1-SNAPSHOT"
            - name: LOAD_TARGET_URL
              value: http://sample-backend-app.default.svc.cluster.local
            - name: SERVER_PORT
              value: "8080"


The domain "sample-backend-app.default.svc.cluster.local", points to the dns name of the backend service created by the Knative serving service resource


Testing

It was easier for me to simply create a small video of how I tested this:



As in my previous post, the request to the application is via the knative ingress gateway, the url to which can be obtained the following way (for a minikube environment):

export GATEWAY_URL=$(echo $(minikube ip):$(kubectl get svc knative-ingressgateway -n istio-system -o 'jsonpath={.spec.ports[?(@.port==80)].nodePort}'))

And a sample request is made the following way; note that the routing in the Gateway is via the host header, in this instance "sample-client-app.default.example.com":

export CLIENT_DOMAIN=$(kubectl get services.serving.knative.dev sample-client-app  -o="jsonpath={.status.domain}")

http http://${GATEWAY_URL}/passthrough/messages Host:"${CLIENT_DOMAIN}" id=1 payload=test delay=100 throw_exception=false


Knative serving - using Ambassador gateway

This is a continuation of my experimentation with Knative serving, this time around building a gateway on top of Knative serving applications. This builds on two of my previous posts - on using Knative to deploy a Spring Boot App and making a service to service call in Knative.

Why a Gateway on top of a Knative application


To explain this let me touch on my previous blog post. Assuming that Knative serving is already available in a Kubernetes environment, the way to deploy an application is using a manifest which looks like this:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: sample-boot-knative-service
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: bijukunjummen/sample-boot-knative-app:0.0.3-SNAPSHOT
            env:
            - name: ASAMPLE_ENV
              value: "sample-env-val"


Now to invoke this application, I have to make the call via an ingress created by Knative serving, which can be obtained the following way in a minikube environment:

export GATEWAY_URL=$(echo $(minikube ip):$(kubectl get svc knative-ingressgateway -n istio-system -o 'jsonpath={.spec.ports[?(@.port==80)].nodePort}'))

The request now has to go through the ingress and the ingress uses a Host http header to then route the request to the app. The host header for the deployed service can be obtained using the following bash script:

export APP_DOMAIN=$(kubectl get services.serving.knative.dev sample-boot-knative-service  -o="jsonpath={.status.domain}")

and then a call is made via the knative ingress gateway the following way, using curl:

curl -X "POST""http://${GATEWAY_URL}/messages" \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Host: ${APP_DOMAIN}" \
-d $'{
"id": "1",
"payload": "one",
"delay": "300"
}'

or using httpie:

http http://${GATEWAY_URL}/messages Host:"${APP_DOMAIN}" id=1 payload=test delay=1

There are too many steps involved in making a call to the application via the knative ingress:



My objective in this post is to simplify the user's experience in making a call to the app by using a Gateway like Ambassador.


Integrating Ambassador to Knative


There is nothing special about installing Ambassador into a Knative environment; the excellent instructions provided here worked cleanly in my minikube environment.

Now my objective with the gateway is summarized in this picture:


With Ambassador in place, all the user has to do is to send a request to the Ambassador Gateway, and it would take care of plugging in the Host header before making a request to the Knative Ingress.

So how does this work? Fairly easily! Assuming Ambassador is in place, all it needs is a configuration which piggybacks on a Kubernetes service the following way:

---
apiVersion: v1
kind: Service
metadata:
  name: sample-knative-app-gateway
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: sample-boot-knative-app
      prefix: /messages
      rewrite: /messages
      service: knative-ingressgateway.istio-system.svc.cluster.local
      host_rewrite: sample-boot-knative-service.default.example.com
spec:
  type: LoadBalancer
  ports:
  - name: ambassador
    port: 80
    targetPort: 80
  selector:
    service: ambassador

Here I am providing the configuration via Service annotations, intercepting any calls to the /messages uri, forwarding these requests to the knative ingressgateway service (knative-ingressgateway.istio-system.svc.cluster.local) and adding a host header of "sample-boot-knative-service.default.example.com".


Now the interaction from a user's perspective is far simpler - all I have to do is to get the url for this new service and make the api call, in a minikube environment using the following bash script:

export AMB_URL=$(echo $(minikube ip):$(kubectl get svc sample-knative-app-gateway -n default -o 'jsonpath={.spec.ports[?(@.port==80)].nodePort}'))

http http://${AMB_URL}/messages id=1 payload=test delay=1


It may be easier to try this with real code, which is available in my github repo here.

Helm chart to deploy and scale a generic app image

This is a post about a simple helm chart that I have worked on to deploy any generic app image to Kubernetes. The chart is available here (https://github.com/bijukunjummen/generic-app-chart).

It tries to solve the issue of having to manage a set of raw Kubernetes resources (deployment, secrets, hpa) by instead letting helm manage these resources. The chart is generic enough that it should be able to handle most 12-factor compliant app images.


Consider a simple app that I have here - https://github.com/bijukunjummen/sample-boot-knative; an image for this application is publicly available on dockerhub - https://hub.docker.com/r/bijukunjummen/sample-boot-knative-app/

If I wanted to deploy this app in a Kubernetes cluster, a set of specs to create a Kubernetes Deployment and a Service is available here - https://github.com/bijukunjummen/sample-boot-knative/tree/master/kube. This is a simple enough deployment; however, the specs can get complicated once configuration and secrets are layered in and if features like horizontal scaling are required.

Usage


There is good documentation in the README of the chart; I will be mostly repeating that information here. I have hosted a version of the chart as a chart repository using github-pages; the first step to using the chart is to add this repo to your list of helm repos:

helm repo add bk-charts http://bijukunjummen.github.io/generic-app-chart
helm repo update

The chart should now show up if searched for:

helm search generic-app-chart


The chart requires the details of the application that is being deployed, which can be provided as a yaml the following way:

app:
  healthCheckPath: /actuator/info
  environment:
    SERVER_PORT: "8080"
    ENV_NAME: ENV_VALUE
  secrets:
    SECRET1: SECRET_VALUE1
  autoscaling:
    enabled: true
    maxReplicas: 2
    minReplicas: 1
    targetCPUUtilizationPercentage: 40

image:
  repository: bijukunjummen/sample-boot-knative-app
  tag: 0.0.3-SNAPSHOT

ingress:
  enabled: true
  path: /
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"

resources:
  constraints:
    enabled: true
    requests:
      cpu: 500m


At a minimum, the only details that are required are the application image and the tag; the rest of the details are just for illustration of what is feasible.

To deploy the app, run the following command:

helm install bk-charts/generic-app-chart  --name my-app --values sample-values.yaml

and a bunch of Kubernetes resources should show up supporting this application.


App upgrades are simple, facilitated by helm:

helm upgrade my-app bk-charts/generic-app-chart -f sample-values.yaml


Conclusion

The chart is fairly minimal at this point and creates a small set of Kubernetes resources - a secret to hold secrets, a deployment, a service and an hpa to scale the app - which suffices for the kind of use cases that I have encountered so far.

Reactive Spring Webflux with AWS DynamoDB

AWS has released the AWS SDK for Java version 2; the SDK now supports non-blocking IO for API calls to different AWS services. In this post I will be exploring the DynamoDB APIs of the AWS SDK 2.x and using the Spring Webflux stack to expose a reactive endpoint - this way the application is reactive end to end and presumably should use resources very efficiently (I have plans to do some tests on this set-up as a follow-up).


Details of the Application


It may be easier to simply look at the code and follow it there - it is available in my GitHub repo.

The application is a simple one - to perform CRUD operations on a Hotel entity represented using the following Kotlin code:

data class Hotel(
    val id: String = UUID.randomUUID().toString(),
    val name: String? = null,
    val address: String? = null,
    val state: String? = null,
    val zip: String? = null
)

I want to expose endpoints to save and retrieve a hotel entity and to get the list of hotels by state.


Details of the AWS SDK 2


The package names of the AWS SDK 2 APIs all start with the "software.amazon.awssdk" prefix now; the client to interact with DynamoDB is created using code along these lines:

import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider
import software.amazon.awssdk.regions.Region
import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient

val client: DynamoDbAsyncClient = DynamoDbAsyncClient.builder()
    .region(Region.of(dynamoProperties.region))
    .credentialsProvider(DefaultCredentialsProvider.builder().build())
    .build()


Once the DynamoDbAsyncClient instance is created, any operation using this client returns a Java 8 CompletableFuture type. For example, in saving a Hotel entity:

val putItemRequest = PutItemRequest.builder()
    .tableName("hotels")
    .item(HotelMapper.toMap(hotel))
    .build()

val result: CompletableFuture<PutItemResponse> =
    dynamoClient.putItem(putItemRequest)

and in retrieving a record by id:

val getItemRequest: GetItemRequest = GetItemRequest.builder()
    .key(mapOf(Constants.ID to AttributeValue.builder().s(id).build()))
    .tableName(Constants.TABLE_NAME)
    .build()

val response: CompletableFuture<GetItemResponse> = dynamoClient.getItem(getItemRequest)

CompletableFuture provides a comprehensive set of functions to transform the results when available.
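
As a small illustration (my own sketch, not code from the sample), the get-by-id result above could be transformed into the domain type once the future completes using thenApply:

// Transform the raw DynamoDB item into a Hotel once the CompletableFuture completes
val hotel: CompletableFuture<Hotel> = dynamoClient.getItem(getItemRequest)
    .thenApply { resp -> HotelMapper.fromMap(id, resp.item()) }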

Integrating with Spring Webflux

Spring Webflux is a reactive web framework. The non-blocking IO support in AWS SDK 2 now makes it possible to write end-to-end reactive and non-blocking applications with DynamoDB. Spring Webflux uses reactor-core to provide reactive-streams support, and the trick to integrating with the AWS SDK 2 is to transform the Java 8 CompletableFuture into a reactor-core type, the following way when retrieving an item from DynamoDB by id:

val getItemRequest: GetItemRequest = GetItemRequest.builder()
    .key(mapOf(Constants.ID to AttributeValue.builder().s(id).build()))
    .tableName(Constants.TABLE_NAME)
    .build()

return Mono.fromCompletionStage(dynamoClient.getItem(getItemRequest))
    .map { resp ->
        HotelMapper.fromMap(id, resp.item())
    }

Spring Webflux expects the return types of the different web endpoint method signatures to be reactive types, so a typical endpoint for getting, say, a list of hotels is the following:

@RequestMapping(value = ["/hotels"], method = [RequestMethod.GET])
fun getHotelsByState(@RequestParam("state") state: String): Flux<Hotel> {
    return hotelRepo.findHotelsByState(state)
}
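
Internally, the repository's findHotelsByState also has to bridge from a CompletableFuture to a Flux. A sketch of how this could be done is shown below - the index name and the query expressions here are my assumptions for illustration, not necessarily what the sample uses:

import reactor.core.publisher.Flux
import reactor.core.publisher.Mono
import software.amazon.awssdk.services.dynamodb.model.AttributeValue
import software.amazon.awssdk.services.dynamodb.model.QueryRequest

fun findHotelsByState(state: String): Flux<Hotel> {
    // Query a hypothetical "HotelsByState" global secondary index
    val queryRequest = QueryRequest.builder()
        .tableName(Constants.TABLE_NAME)
        .indexName("HotelsByState")
        .keyConditionExpression("#state = :state")
        .expressionAttributeNames(mapOf("#state" to "state"))
        .expressionAttributeValues(mapOf(":state" to AttributeValue.builder().s(state).build()))
        .build()

    // Bridge the CompletableFuture into a Flux of Hotel entities
    return Mono.fromCompletionStage(dynamoClient.query(queryRequest))
        .flatMapMany { resp -> Flux.fromIterable(resp.items()) }
        .map { item -> HotelMapper.fromMap(item[Constants.ID]!!.s(), item) }
}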

Spring Webflux also supports a functional way to describe the API of the application, so an equivalent API to retrieve a hotel by its id, but expressed as a functional DSL is the following:

@Configuration
class HotelAdditionalRoutes {

    @Bean
    fun routes(hotelRepo: HotelRepo) = router {
        GET("/hotels/{id}") { req ->
            val id = req.pathVariable("id")
            val response: Mono<ServerResponse> = hotelRepo.getHotel(id)
                .flatMap { hotel ->
                    ServerResponse.ok().body(BodyInserters.fromObject(hotel))
                }
            response.switchIfEmpty(ServerResponse.notFound().build())
        }
    }
}


Conclusion

AWS SDK 2 makes it simple to write end-to-end reactive and non-blocking applications. I have used Spring Webflux and the AWS SDK 2 dynamo client to write such an application here. The entire working sample is available in my GitHub repo - https://github.com/bijukunjummen/boot-with-dynamodb, and has instructions on how to start up a local version of DynamoDB and use it for testing the application.


Unit testing DynamoDB applications using JUnit5

In a previous post I had described the new AWS SDK for Java 2 which provides non-blocking IO support for Java clients calling different AWS services. In this post I will go over an approach that I have followed to unit test the AWS DynamoDB calls.

There are a few ways to spin up a local version of DynamoDB -

1. AWS provides a DynamoDB local
2. Localstack provides a way to spin up a good number of AWS services locally
3. A docker version of DynamoDB Local
4. Dynalite, a node based implementation of DynamoDB


Now to be able to unit test an application, I need to be able to start up an embedded version of DynamoDB using one of these options right before a test runs and then shut it down after a test completes. There are three approaches that I have taken:

1. Using a JUnit 5 extension that internally brings up an AWS DynamoDB Local and spins it down after a test.
2. Using testcontainers to start up a docker version of DynamoDB Local
3. Using testcontainers to start up Dynalite

JUnit5 extension

A JUnit5 extension provides a convenient hook point to start up an embedded version of DynamoDB for tests. It works by pulling in a version of DynamoDB Local as a maven dependency:

dependencies {
    ...
    testImplementation("com.amazonaws:DynamoDBLocal:1.11.119")
    ...
}

A complication with this dependency is that there are native components (dll, .so etc) that DynamoDB Local interacts with, and to get these in the right place I depend on a Gradle task:

task copyNativeDeps(type: Copy) {
    mkdir "build/native-libs"
    from(configurations.testCompileClasspath) {
        include '*.dll'
        include '*.dylib'
        include '*.so'
    }
    into 'build/native-libs'
}

test {
    dependsOn copyNativeDeps
}

which puts the native libs in the build/native-libs folder, and the extension internally sets this path as a system property:

System.setProperty("sqlite4java.library.path", libPath.toAbsolutePath().toString())

Here is the codebase to the JUnit5 extension with all these already hooked up - https://github.com/bijukunjummen/boot-with-dynamodb/blob/master/src/test/kotlin/sample/dyn/rules/LocalDynamoExtension.kt
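
The essence of the extension is the pair of JUnit 5 BeforeAll/AfterAll callbacks; a bare-bones sketch looks something like the following - the real extension in the repo does more (random port selection, sync and async clients), this only illustrates the hook points:

import com.amazonaws.services.dynamodbv2.local.main.ServerRunner
import com.amazonaws.services.dynamodbv2.local.server.DynamoDBProxyServer
import org.junit.jupiter.api.extension.AfterAllCallback
import org.junit.jupiter.api.extension.BeforeAllCallback
import org.junit.jupiter.api.extension.ExtensionContext

// Bare-bones sketch of a local DynamoDB JUnit 5 extension - illustration only
class LocalDynamoExtensionSketch : BeforeAllCallback, AfterAllCallback {
    private var server: DynamoDBProxyServer? = null

    override fun beforeAll(context: ExtensionContext) {
        // Point DynamoDB Local to the native libs copied in by the gradle task above
        System.setProperty("sqlite4java.library.path", "build/native-libs")
        server = ServerRunner.createServerFromCommandLineArgs(arrayOf("-inMemory", "-port", "8000"))
        server?.start()
    }

    override fun afterAll(context: ExtensionContext) {
        server?.stop()
    }
}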

A test using this extension looks like this:

class HotelRepoTest {
    companion object {
        @RegisterExtension
        @JvmField
        val localDynamoExtension = LocalDynamoExtension()

        @BeforeAll
        @JvmStatic
        fun beforeAll() {
            val dbMigrator = DbMigrator(localDynamoExtension.syncClient!!)
            dbMigrator.migrate()
        }
    }

    @Test
    fun saveHotel() {
        val hotelRepo = DynamoHotelRepo(localDynamoExtension.asyncClient!!)
        val hotel = Hotel(id = "1", name = "test hotel", address = "test address", state = "OR", zip = "zip")
        val resp = hotelRepo.saveHotel(hotel)

        StepVerifier.create(resp)
            .expectNext(hotel)
            .expectComplete()
            .verify()
    }
}

The code can interact with a fully featured DynamoDB.

TestContainers with DynamoDB Local Docker


The JUnit5 extension approach works well, but it requires an additional dependency with native binaries to be pulled in. A cleaner approach may be to use the excellent Testcontainers library to spin up a docker version of DynamoDB Local the following way:

class HotelRepoLocalDynamoTestContainerTest {
    @Test
    fun saveHotel() {
        val hotelRepo = DynamoHotelRepo(getAsyncClient(dynamoDB))
        val hotel = Hotel(id = "1", name = "test hotel", address = "test address", state = "OR", zip = "zip")
        val resp = hotelRepo.saveHotel(hotel)

        StepVerifier.create(resp)
            .expectNext(hotel)
            .expectComplete()
            .verify()
    }

    companion object {
        val dynamoDB: KGenericContainer = KGenericContainer("amazon/dynamodb-local:1.11.119")
            .withExposedPorts(8000)

        @BeforeAll
        @JvmStatic
        fun beforeAll() {
            dynamoDB.start()
        }

        @AfterAll
        @JvmStatic
        fun afterAll() {
            dynamoDB.stop()
        }

        fun getAsyncClient(dynamoDB: KGenericContainer): DynamoDbAsyncClient {
            val endpointUri = "http://" + dynamoDB.getContainerIpAddress() + ":" +
                dynamoDB.getMappedPort(8000)
            val builder: DynamoDbAsyncClientBuilder = DynamoDbAsyncClient.builder()
                .endpointOverride(URI.create(endpointUri))
                .region(Region.US_EAST_1)
                .credentialsProvider(StaticCredentialsProvider
                    .create(AwsBasicCredentials
                        .create("acc", "sec")))
            return builder.build()
        }

        ...
    }
}

This code starts up DynamoDB Local at a random unoccupied port and exposes the container's host and mapped port so that the client can be created with them. There is a little Kotlin workaround that I had to do, based on an issue reported here - https://github.com/testcontainers/testcontainers-java/issues/318
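
For reference, the workaround is essentially a self-typed subclass so that the fluent "with*" methods return the concrete Kotlin type - something along these lines:

import org.testcontainers.containers.GenericContainer

// Kotlin workaround for the recursive generic in GenericContainer<SELF>
class KGenericContainer(imageName: String) : GenericContainer<KGenericContainer>(imageName)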


TestContainers with Dynalite


Dynalite is a javascript-based implementation of DynamoDB and can be run for tests, again using the TestContainers approach. This time, however, there is already a TestContainers module for Dynalite. I found that it does not support JUnit5 and sent a pull request to provide this support; in the interim the raw docker image can be used, and this is how a test looks:

class HotelRepoDynaliteTestContainerTest {
    @Test
    fun saveHotel() {
        val hotelRepo = DynamoHotelRepo(getAsyncClient(dynamoDB))
        val hotel = Hotel(id = "1", name = "test hotel", address = "test address", state = "OR", zip = "zip")
        val resp = hotelRepo.saveHotel(hotel)

        StepVerifier.create(resp)
            .expectNext(hotel)
            .expectComplete()
            .verify()
    }

    companion object {
        val dynamoDB: KGenericContainer = KGenericContainer("quay.io/testcontainers/dynalite:v1.2.1-1")
            .withExposedPorts(4567)

        @BeforeAll
        @JvmStatic
        fun beforeAll() {
            dynamoDB.start()
            val dbMigrator = DbMigrator(getSyncClient(dynamoDB))
            dbMigrator.migrate()
        }

        @AfterAll
        @JvmStatic
        fun afterAll() {
            dynamoDB.stop()
        }

        fun getAsyncClient(dynamoDB: KGenericContainer): DynamoDbAsyncClient {
            val endpointUri = "http://" + dynamoDB.getContainerIpAddress() + ":" +
                dynamoDB.getMappedPort(4567)
            val builder: DynamoDbAsyncClientBuilder = DynamoDbAsyncClient.builder()
                .endpointOverride(URI.create(endpointUri))
                .region(Region.US_EAST_1)
                .credentialsProvider(StaticCredentialsProvider
                    .create(AwsBasicCredentials
                        .create("acc", "sec")))
            return builder.build()
        }

        ...
    }
}

Conclusion

All of the approaches are useful in being able to test integration with DynamoDB. My personal preference is the TestContainers approach if a docker agent is available, else the JUnit5 extension approach. The samples with fully working tests using all three approaches are available in my github repo - https://github.com/bijukunjummen/boot-with-dynamodb