
How to serialize case class to Json in Scala 3 & Scala 2 using Circe

Adam Rybicki

24 May 2023 · 13 minutes read


In my previous blog post, I showed how to serialize Scala’s case classes and ADTs following the standard camelCase convention into JSON with snake_case fields using uPickle. This time I want to show how to achieve the same result with a more popular library, Circe, and in two versions of Scala: 2 and 3.

Configuration & Necessary dependencies
For Scala 2:

"com.softwaremill.sttp.client3" %% "core" % "3.8.15"
"com.softwaremill.sttp.client3" %% "circe" % "3.8.15"
"io.circe" %% "circe-core" % "0.14.3"
"io.circe" %% "circe-generic" % "0.14.3"
"io.circe" %% "circe-parser" % "0.14.3"
"io.circe" %% "circe-generic-extras" % "0.14.3"

We also need to add the -Ymacro-annotations flag to the compiler options in build.sbt, as circe-generic-extras requires it:

.settings(
 scalacOptions ++= Seq("-Ymacro-annotations")
)
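Put together, a minimal Scala 2 build definition might look like this (a sketch only; the project layout and Scala version are assumptions, the dependency versions are the ones listed above):

```scala
// build.sbt -- minimal Scala 2 setup (sketch; scalaVersion is an assumption)
lazy val root = (project in file("."))
  .settings(
    scalaVersion := "2.13.10",
    // required by circe-generic-extras' @ConfiguredJsonCodec macro annotation
    scalacOptions ++= Seq("-Ymacro-annotations"),
    libraryDependencies ++= Seq(
      "com.softwaremill.sttp.client3" %% "core" % "3.8.15",
      "com.softwaremill.sttp.client3" %% "circe" % "3.8.15",
      "io.circe" %% "circe-core" % "0.14.3",
      "io.circe" %% "circe-generic" % "0.14.3",
      "io.circe" %% "circe-parser" % "0.14.3",
      "io.circe" %% "circe-generic-extras" % "0.14.3"
    )
  )
```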

For Scala 3:

"com.softwaremill.sttp.client3" %% "core" % "3.8.15"
"com.softwaremill.sttp.client3" %% "circe" % "3.8.15"
"io.circe" %% "circe-core" % "0.14.5"
"io.circe" %% "circe-parser" % "0.14.5"

In our example, we’ll work with sttp-openai, a type-safe Scala library for accessing OpenAI services like ChatGPT.
Let’s look at the Completions endpoint, which returns one or more predicted completions for the given prompt, and which we’re implementing in sttp-openai.

The sttp-openai library defines the following model for the request body:

case class CompletionsBody(
   model: String,
   prompt: Option[Prompt] = None,
   suffix: Option[String] = None,
   maxTokens: Option[Int] = None,
   temperature: Option[Double] = None,
   topP: Option[Double] = None,
   n: Option[Int] = None,
   logprobs: Option[Int] = None,
   echo: Option[Boolean] = None,
   stop: Option[Stop] = None,
   presencePenalty: Option[Double] = None,
   frequencyPenalty: Option[Double] = None,
   bestOf: Option[Int] = None,
   logitBias: Option[Map[String, Float]] = None,
   user: Option[String] = None
)

sealed trait Prompt
case class SinglePrompt(value: String) extends Prompt
case class MultiplePrompt(values: Seq[String]) extends Prompt

sealed trait Stop
case class SingleStop(value: String) extends Stop
case class MultipleStop(values: Seq[String]) extends Stop

Prompt can be represented as either a single String or an array of Strings. The same goes for Stop.
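To make the dual representation concrete, here is a plain-Scala sketch (no Circe involved; renderPrompt is a hypothetical helper written just for illustration) of the two JSON shapes a Prompt can take:

```scala
// Illustration only: a hand-rolled renderer mimicking the JSON shapes
// that the Circe encoders defined later produce. Not part of sttp-openai.
sealed trait Prompt
case class SinglePrompt(value: String) extends Prompt
case class MultiplePrompt(values: Seq[String]) extends Prompt

def renderPrompt(p: Prompt): String = p match {
  // a single prompt becomes a bare JSON string
  case SinglePrompt(v) => "\"" + v + "\""
  // multiple prompts become a JSON array of strings
  case MultiplePrompt(vs) => vs.map(v => "\"" + v + "\"").mkString("[", ",", "]")
}
```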

And the model for the response:

case class CompletionsResponse(
   id: String,
   `object`: String,
   created: Int,
   model: String,
   choices: Seq[Choices],
   usage: Usage
)
case class Choices(
    text: String,
    index: Int,
    logprobs: Option[String],
    finishReason: String
)

case class Usage(
    promptTokens: Int, 
    completionTokens: Int, 
    totalTokens: Int
)

We aim to serialize CompletionsBody with camelCase fields to JSON with snake_case keys and deserialize JSON response with snake_case keys from OpenAI into CompletionsResponse with camelCase fields.

{
  "id": "cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7",
  "object": "text_completion",
  "created": 1589478378,
  "model": "text-davinci-003",
  "choices": [
    {
    "text": "\n\nThis is indeed a test",
    "index": 0,
    "logprobs": null,
    "finish_reason": "length"
    }
  ],
  "usage": {
    "prompt_tokens": 5,
    "completion_tokens": 7,
    "total_tokens": 12
  }
}

Working with Scala 3

To use our model with sttp, we have to provide a way to serialize (in Circe’s terms, encode) CompletionsBody and to deserialize (decode) CompletionsResponse.

Sttp’s body method

def body[B: BodySerializer](b: B): RequestT[U, T, R] =
 withBody(implicitly[BodySerializer[B]].apply(b))

accepts an implicit parameter BodySerializer[B], which is provided by the sttp.client3.circe.SttpCirceApi trait:

implicit def circeBodySerializer[B](implicit
   encoder: Encoder[B],
   printer: Printer = Printer.noSpaces
): BodySerializer[B] =
 b => StringBody(encoder(b).printWith(printer), Utf8, MediaType.ApplicationJson)

The circeBodySerializer method accepts two implicit parameters: io.circe.Encoder[B], which we must provide for our body class (Encoder[CompletionsBody]), and Printer, which has a default value.
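The implicit chain at play here (an Encoder in scope yields a BodySerializer, which the body method then requires) can be sketched with simplified stand-ins for sttp’s and Circe’s types. All names below are illustrative, not the real library definitions:

```scala
// Simplified stand-ins for Circe's Encoder and sttp's BodySerializer,
// showing how `body(b)` finds a serializer through implicit resolution.
trait Encoder[A] { def apply(a: A): String }
trait BodySerializer[A] { def serialize(a: A): String }

// like circeBodySerializer: any A with an Encoder gets a BodySerializer
implicit def encoderBodySerializer[A](implicit e: Encoder[A]): BodySerializer[A] =
  new BodySerializer[A] { def serialize(a: A): String = e(a) }

case class Ping(msg: String)
implicit val pingEncoder: Encoder[Ping] =
  new Encoder[Ping] { def apply(p: Ping): String = s"""{"msg":"${p.msg}"}""" }

// like sttp's body method: requires a BodySerializer in scope
def body[B](b: B)(implicit s: BodySerializer[B]): String = s.serialize(b)
```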

In the same trait, there is a method

def asJson[B: Decoder: IsOption]: ResponseAs[Either[ResponseException[String, io.circe.Error], B]] =
  asString.mapWithMetadata(ResponseAs.deserializeRightWithError(deserializeJson)).showAsJson

That is used in sttp’s response method

/** Specifies the target type to which the response body should be read. Note that this replaces any previous
 * specifications, which also include any previous `mapResponse` invocations.
 */
def response[T2](ra: ResponseAs[T2]): Request[T2] = copy[T2](response = ra)

As we can see, the asJson method requires the generic parameter B to have an instance of Circe’s Decoder[A] trait, which we will also have to provide for our response class: Decoder[CompletionsResponse].

Let’s start with the request’s body. To create an Encoder for CompletionsBody, we can make CompletionsBody derive the ConfiguredEncoder trait.

trait ConfiguredEncoder[A](using conf: Configuration) extends Encoder.AsObject[A]: 

That requires providing an implicit Configuration accessible in the class’s scope:

given Configuration = Configuration.default.withSnakeCaseMemberNames

According to the documentation:

Configuration allowing customization of the JSON produced when encoding, or expected when decoding.
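As an illustration of what withSnakeCaseMemberNames does to member names (an assumption based on its observed output, not Circe’s actual implementation), the renaming can be reproduced with a few lines of stdlib Scala:

```scala
// Sketch of the camelCase -> snake_case renaming that
// Configuration.default.withSnakeCaseMemberNames applies to member names:
// every uppercase letter becomes an underscore plus its lowercase form.
def toSnakeCase(name: String): String =
  name.flatMap { c =>
    if (c.isUpper) "_" + c.toLower else c.toString
  }
```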

object CompletionsRequestBody {

  given Configuration = Configuration.default.withSnakeCaseMemberNames

  case class CompletionsBody(
     model: String,
     prompt: Option[Prompt] = None,
     suffix: Option[String] = None,
     maxTokens: Option[Int] = None,
     temperature: Option[Double] = None,
     topP: Option[Double] = None,
     n: Option[Int] = None,
     logprobs: Option[Int] = None,
     echo: Option[Boolean] = None,
     stop: Option[Stop] = None,
     presencePenalty: Option[Double] = None,
     frequencyPenalty: Option[Double] = None,
     bestOf: Option[Int] = None,
     logitBias: Option[Map[String, Float]] = None,
     user: Option[String] = None
  ) derives ConfiguredEncoder
}

And that’s basically it. The only thing left to do for CompletionsBody is to provide Encoders for both the Prompt and Stop traits.

Prompt and Stop can be represented in two possible ways, so we must define custom ways to encode them.

object Prompt {
 given Encoder[Prompt] = {
   case SinglePrompt(value)    => Json.fromString(value)
   case MultiplePrompt(values) => Json.arr(values.map(Json.fromString): _*)
 }
}

object Stop {
 given Encoder[Stop] = {
   case SingleStop(value)    => Json.fromString(value)
   case MultipleStop(values) => Json.arr(values.map(Json.fromString): _*)
 }
}

The complete implementation of CompletionsRequestBody looks as follows:

import io.circe.{Encoder, Json}
import io.circe.derivation.{Configuration, ConfiguredEncoder}

object CompletionsRequestBody {

 given Configuration = Configuration.default.withSnakeCaseMemberNames
 case class CompletionsBody(
     model: String,
     prompt: Option[Prompt] = None,
     suffix: Option[String] = None,
     maxTokens: Option[Int] = None,
     temperature: Option[Double] = None,
     topP: Option[Double] = None,
     n: Option[Int] = None,
     logprobs: Option[Int] = None,
     echo: Option[Boolean] = None,
     stop: Option[Stop] = None,
     presencePenalty: Option[Double] = None,
     frequencyPenalty: Option[Double] = None,
     bestOf: Option[Int] = None,
     logitBias: Option[Map[String, Float]] = None,
     user: Option[String] = None
 ) derives ConfiguredEncoder

 sealed trait Prompt
 object Prompt {
   given Encoder[Prompt] = {
     case SinglePrompt(value)    => Json.fromString(value)
     case MultiplePrompt(values) => Json.arr(values.map(Json.fromString): _*)
   }
 }

 case class SinglePrompt(value: String) extends Prompt
 case class MultiplePrompt(values: Seq[String]) extends Prompt

 sealed trait Stop
 object Stop {
   given Encoder[Stop] = {
     case SingleStop(value)    => Json.fromString(value)
     case MultipleStop(values) => Json.arr(values.map(Json.fromString): _*)
   }
 }

 case class SingleStop(value: String) extends Stop
 case class MultipleStop(values: Seq[String]) extends Stop
}

Then we prepare a request method. What I like to do before I start sending requests to services (especially ones that cost money per request) is to check how the requests are formatted.

import sttp.client3.*
import sttp.client3.circe.circeBodySerializer
import io.circe.syntax.*
import sttp.model.Uri
import sttp.openai.CompletionsRequestBody.CompletionsBody

class OpenAI(authToken: String) {

def createCompletion(completionBody: CompletionsBody): String = {
   val jsonBody = completionBody.asJson

   openAIAuthRequest
     .post(OpenAIUris.Completions)
     .body(jsonBody)
     .toCurl
 }

private val openAIAuthRequest: RequestT[Empty, Either[String, String], Any]  = basicRequest.auth
 .bearer(authToken)

}

private object OpenAIUris {
 val Completions: Uri = uri"https://api.openai.com/v1/completions"
}

import sttp.openai.OpenAI
import sttp.openai.CompletionsRequestBody.*

object Main extends App {

 val openAI = new OpenAI("secret-api-key")

 val body = CompletionsBody(
   "text-davinci-003",
   prompt = Some(MultiplePrompt(Seq("multiple prompt", "multiple prompt x 2"))),
   stop = Some(MultipleStop(Seq("multiple stop", "multiple stop x 2")))
 )

 val curl = openAI.createCompletion(body)
 println(curl)
}

We see that our JSON has been created with all of its fields; those that were left empty (None) were serialized as null values.

{
    "model":"text-davinci-003",
    "prompt":[
        "multiple prompt",
        "multiple prompt x 2"
    ],
    "suffix":null,
    "max_tokens":null,
    "temperature":null,
    "top_p":null,
    "n":null,
    "logprobs":null,
    "echo":null,
    "stop":[
        "multiple stop",
        "multiple stop x 2"
    ],
    "presence_penalty":null,
    "frequency_penalty":null,
    "best_of":null,
    "logit_bias":null,
    "user":null
}

The problem is that once we send it to the server, OpenAI will respond with an error like the one below:

sttp.client3.HttpError: statusCode: 400, response: {
  "error": {
    "message": "None is not of type 'object' - 'logit_bias'",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}

To fix it, we have to get rid of all the null values in the JSON body. That can be achieved by calling deepDropNullValues where we build the JSON from CompletionsBody:

def createCompletion(completionBody: CompletionsBody): String = {
 val jsonBody = completionBody.asJson.deepDropNullValues

 openAIAuthRequest
   .post(OpenAIUris.Completions)
   .body(jsonBody)
   .toCurl
}

After the change, the content of our JSON looks like this:

{
    "model":"text-davinci-003",
    "prompt":[
        "multiple prompt",
        "multiple prompt x 2"
    ],
    "stop":[
        "multiple stop",
        "multiple stop x 2"
    ]
}
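The effect of deepDropNullValues can be sketched on a toy JSON model (an illustration of the observed behavior: object fields whose value is null are removed, recursively; this is not Circe’s actual implementation):

```scala
// Toy JSON model to illustrate what deepDropNullValues does.
sealed trait JsValue
case object JsNull extends JsValue
case class JsString(s: String) extends JsValue
case class JsArray(items: Seq[JsValue]) extends JsValue
case class JsObject(fields: Seq[(String, JsValue)]) extends JsValue

// Recursively drop object fields whose value is null.
def deepDropNulls(j: JsValue): JsValue = j match {
  case JsObject(fs) =>
    JsObject(fs.collect { case (k, v) if v != JsNull => (k, deepDropNulls(v)) })
  case JsArray(is) => JsArray(is.map(deepDropNulls))
  case other       => other
}
```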

Now the only thing left is to build CompletionsResponse.

Since the JSON response will have keys in the snake_case convention, we have to provide a Configuration and then make the CompletionsResponse class and its nested case classes derive ConfiguredDecoder, just as we did for CompletionsBody.

trait ConfiguredDecoder[A](using conf: Configuration) extends Decoder[A]:

import io.circe.Decoder
import io.circe.derivation.{Configuration, ConfiguredDecoder}

object CompletionsResponse {
 given Configuration = Configuration.default.withSnakeCaseMemberNames

 case class CompletionsResponse(
     id: String,
     `object`: String,
     created: Int,
     model: String,
     choices: Seq[Choices],
     usage: Usage
 ) derives ConfiguredDecoder

 case class Choices(
     text: String,
     index: Int,
     logprobs: Option[String],
     finishReason: String
 ) derives ConfiguredDecoder

 case class Usage(promptTokens: Int, completionTokens: Int, totalTokens: Int) derives ConfiguredDecoder

}

Then we finish our request method

import sttp.client3.*
import sttp.client3.circe.circeBodySerializer
import sttp.client3.circe.asJson
import io.circe.syntax.*
import sttp.model.Uri
import sttp.openai.CompletionsRequestBody.CompletionsBody
import sttp.openai.CompletionsResponse.CompletionsResponse

class OpenAI(authToken: String) {

 def createCompletion(completionBody: CompletionsBody): RequestT[Identity, Either[ResponseException[String, io.circe.Error], CompletionsResponse], Any] = {

   val json = completionBody.asJson.deepDropNullValues

   openAIAuthRequest
     .post(OpenAIUris.Completions)
     .body(json)
     .response(asJson[CompletionsResponse])
 }

 private val openAIAuthRequest: RequestT[Empty, Either[String, String], Any] = basicRequest.auth
   .bearer(authToken)

}

private object OpenAIUris {
 val Completions: Uri = uri"https://api.openai.com/v1/completions"
}

And check the response

import io.circe.*
import sttp.client3.*
import sttp.openai.CompletionsRequestBody.*
import sttp.openai.OpenAI

object Main extends App {
 val backend = HttpClientSyncBackend()

 val openAI = new OpenAI("secret-api-key")

 val body = CompletionsBody(
   "text-davinci-003",
   prompt = Some(MultiplePrompt(Seq("multiple prompt", "multiple prompt x 2"))),
   stop = Some(MultipleStop(Seq("multiple stop", "multiple stop x 2")))
 )

 val response = openAI.createCompletion(body).send(backend)
 println(response)
}
Response(Right(CompletionsResponse(cmpl-7EdZvuy3rRtQBj1QXrVd0aTHLUeUx,
text_completion,1683723087,text-davinci-003,List(Choices( support

Prompt support for accessing and managing different systems is called multi-,0,None,length), Choices(

Q: What's your name?
A: My name is John,1,None,length)),Usage(6,32,38))),200,,
List(cache-control: no-cache, must-revalidate, x-ratelimit-reset-tokens: 
12ms, access-control-allow-origin: *, 
x-request-id: ae3ce088f80a79ec0d121963ada7e9d5, 
openai-version: 2020-10-01, openai-processing-ms: 806, 
x-ratelimit-limit-requests: 60, 
x-ratelimit-remaining-requests: 59, date: Wed, 10 May 2023 12:51:28 GMT, 
openai-organization: sml-z2a9cc, alt-svc: h3=":443"; ma=86400, h3-29=":443"; 
ma=86400, content-type: application/json, openai-model: text-davinci-003, 
server: cloudflare, x-ratelimit-limit-tokens: 150000, 
cf-cache-status: DYNAMIC, 
content-encoding: gzip, cf-ray: 7c525052196ec00d-WAW, 
x-ratelimit-remaining-tokens: 149967, :status: 200, 
x-ratelimit-reset-requests: 1s, strict-transport-security: max-age=15724800; 
includeSubDomains),List(),
RequestMetadata(POST,https://api.openai.com/v1/completions,
Vector(Accept-Encoding: gzip, deflate, Authorization: ***, 
Content-Type: application/json; charset=utf-8)))

It works as expected.
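Since asJson produces an Either in the response body, callers eventually have to unpack it. Sketched below with plain Either values standing in for sttp’s ResponseException (CompletionsResponse is trimmed down and describe is a hypothetical helper, not part of the library):

```scala
// Plain-Either sketch of consuming an asJson result:
// Left is a deserialization/HTTP error, Right is the decoded model.
case class CompletionsResponse(id: String, model: String)

def describe(result: Either[String, CompletionsResponse]): String =
  result match {
    case Right(r)  => s"completion ${r.id} from ${r.model}"
    case Left(err) => s"request failed: $err"
  }
```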

Explicit Configuration

It’s worth noting that the above solution is not well documented; the information in https://github.com/circe/circe/pull/1800 says that to create an Encoder and Decoder with a specific implicit configuration accessible in scope, a case class should derive Encoder.AsObject, Decoder, or Codec.AsObject.

The other way to create Encoders and Decoders with a Configuration is to create them manually and pass the configuration explicitly.

To do so, we again start with the request’s body. To create an Encoder for CompletionsBody, we can use io.circe.Encoder.AsObject, which provides the derived method:

inline final def derived[A](using inline A: Mirror.Of[A]): Encoder.AsObject[A] =
  ConfiguredEncoder.derived[A](using Configuration.default)

That will only do part of the work: the case class will be serialized into JSON, but the keys will stay in camelCase. To change them into snake_case, we have to provide a Configuration.

So we create a configuration

private val config = Configuration.default.withSnakeCaseMemberNames

and then pass it into the previously created Encoder. The problem is that, if we look at the derived method, it doesn’t accept any parameter other than Mirror.Of[A].

In order to pass the configuration into the Encoder, we have to create one using ConfiguredEncoder from io.circe.derivation:

object ConfiguredEncoder:
 inline final def derived[A](using conf: Configuration)(using inline mirror: Mirror.Of[A]): ConfiguredEncoder[A] = . . .

So we are left with

object CompletionsBody { 
    given Encoder[CompletionsBody] = ConfiguredEncoder.derived(using config)
}

That will change class fields into snake_case keys upon encoding it to JSON.

Similarly, we create Decoders for CompletionsResponse:

package sttp.openai

import io.circe.Decoder
import io.circe.derivation.{Configuration, ConfiguredDecoder}

object CompletionsResponse {
 private val config: Configuration = Configuration.default.withSnakeCaseMemberNames

 case class CompletionsResponse(
     id: String,
     `object`: String,
     created: Int,
     model: String,
     choices: Seq[Choices],
     usage: Usage
 )

 object CompletionsResponse {
   given Decoder[CompletionsResponse] = Decoder.derived
 }

 case class Choices(
     text: String,
     index: Int,
     logprobs: Option[String],
     finishReason: String
 )

 object Choices {
   given Decoder[Choices] = ConfiguredDecoder.derived(using config)
 }

 case class Usage(promptTokens: Int, completionTokens: Int, totalTokens: Int)

 object Usage {
   given Decoder[Usage] = ConfiguredDecoder.derived(using config)
 }

}

If we compare those two solutions and dig more into how exactly the manual creation of Decoders works, we can understand the mechanism behind automatic creation more clearly.

Working with Scala 2

In Scala 2, Circe provides io.circe.generic.extras (not yet released for Scala 3), which gives us the @ConfiguredJsonCodec annotation. It simplifies the process of defining JSON encoders and decoders, and can be used on case classes and sealed traits, provided an implicit Configuration is in scope.

@ConfiguredJsonCodec 
case class CompletionsBody(
   model: String,
   prompt: Option[Prompt] = None,
   suffix: Option[String] = None,
   maxTokens: Option[Int] = None,
   temperature: Option[Double] = None,
   topP: Option[Double] = None,
   n: Option[Int] = None,
   logprobs: Option[Int] = None,
   echo: Option[Boolean] = None,
   stop: Option[Stop] = None,
   presencePenalty: Option[Double] = None,
   frequencyPenalty: Option[Double] = None,
   bestOf: Option[Int] = None,
   logitBias: Option[Map[String, Float]] = None,
   user: Option[String] = None
)

object CompletionsBody {
 implicit val config: Configuration = Configuration.default.withSnakeCaseMemberNames
}

Now we have to provide custom implementations of the Encoders for Prompt and Stop:

sealed trait Prompt
case class SinglePrompt(value: String) extends Prompt
case class MultiplePrompt(values: Seq[String]) extends Prompt
object Prompt {

 implicit val promptDecoder: Decoder[Prompt] = deriveDecoder[Prompt]
 implicit val encodePrompt: Encoder[Prompt] = {
   case SinglePrompt(value)   => Json.fromString(value)
   case MultiplePrompt(items) => Json.arr(items.map(Json.fromString): _*)
 }
}

sealed trait Stop
case class SingleStop(value: String) extends Stop
case class MultipleStop(values: Seq[String]) extends Stop
object Stop {
 implicit val stopDecoder: Decoder[Stop] = deriveDecoder[Stop]
 implicit val encodeStop: Encoder[Stop] = {
   case SingleStop(value)   => Json.fromString(value)
   case MultipleStop(items) => Json.arr(items.map(Json.fromString): _*)
 }
}

And we are left with

import io.circe._
import io.circe.generic.extras._
import io.circe.generic.semiauto._

object CompletionsRequestBody {
 @ConfiguredJsonCodec case class CompletionsBody(
     model: String,
     prompt: Option[Prompt] = None,
     suffix: Option[String] = None,
     maxTokens: Option[Int] = None,
     temperature: Option[Double] = None,
     topP: Option[Double] = None,
     n: Option[Int] = None,
     logprobs: Option[Int] = None,
     echo: Option[Boolean] = None,
     stop: Option[Stop] = None,
     presencePenalty: Option[Double] = None,
     frequencyPenalty: Option[Double] = None,
     bestOf: Option[Int] = None,
     logitBias: Option[Map[String, Float]] = None,
     user: Option[String] = None
 )

 object CompletionsBody {
   implicit val config: Configuration = Configuration.default.withSnakeCaseMemberNames
 }

 sealed trait Prompt
 case class SinglePrompt(value: String) extends Prompt
 case class MultiplePrompt(values: Seq[String]) extends Prompt
 object Prompt {

   implicit val promptDecoder: Decoder[Prompt] = deriveDecoder[Prompt]
   implicit val encodePrompt: Encoder[Prompt] = {
     case SinglePrompt(value)   => Json.fromString(value)
     case MultiplePrompt(items) => Json.arr(items.map(Json.fromString): _*)
   }
 }

 sealed trait Stop
 case class SingleStop(value: String) extends Stop
 case class MultipleStop(values: Seq[String]) extends Stop
 object Stop {
   implicit val stopDecoder: Decoder[Stop] = deriveDecoder[Stop]
   implicit val encodeStop: Encoder[Stop] = {
     case SingleStop(value)   => Json.fromString(value)
     case MultipleStop(items) => Json.arr(items.map(Json.fromString): _*)
   }
 }
}

The CompletionsResponse classes don’t need any custom decoding logic, so we can just use @ConfiguredJsonCodec and provide the correct Configuration:

import io.circe.generic.extras._
object CompletionsResponse {
 @ConfiguredJsonCodec case class CompletionsResponse(
     id: String,
     `object`: String,
     created: Int,
     model: String,
     choices: Seq[Choices],
     usage: Usage
 )

 object CompletionsResponse {
   implicit val config: Configuration = Configuration.default.withDefaults
 }

 @ConfiguredJsonCodec case class Choices(
     text: String,
     index: Int,
     logprobs: Option[String],
     finishReason: String
 )

 object Choices {
   implicit val config: Configuration = Configuration.default.withSnakeCaseMemberNames
 }

 @ConfiguredJsonCodec case class Usage(promptTokens: Int, completionTokens: Int, totalTokens: Int)

 object Usage {
   implicit val config: Configuration = Configuration.default.withSnakeCaseMemberNames
 }

}

In CompletionsResponse, we don’t have any fields that require a special Configuration, so we are using the default one. For Choices and Usage, we’re using the withSnakeCaseMemberNames configuration, so snake_case keys will deserialize into our camelCase fields.

The request implementation in the OpenAI class stays the same; we only have to remember to call deepDropNullValues when we create the request body.

import sttp.client3.HttpClientSyncBackend
import sttp.openai.OpenAI
import sttp.openai.CompletionsRequestBody.CompletionsBody
import sttp.openai.CompletionsRequestBody.SinglePrompt
import sttp.openai.CompletionsRequestBody.MultipleStop

object Main extends App {
 val backend = HttpClientSyncBackend()

 val openAI = new OpenAI("secret-api-key")

 val body = CompletionsBody(
   "text-davinci-003",
   prompt = Some(SinglePrompt("single prompt")),
   stop = Some(MultipleStop(Seq("single stop", "maybe not")))
 )

 val response = openAI.createCompletion(body).send(backend)
 println(response)
}

We check the response again

Response(Right(CompletionsResponse(cmpl-7Ee8WOvT72CeETYTOzNZLQj5GAMCV,
text_completion,
1683725232,text-davinci-003,List(Choices(
What are you grateful for?

I'm grateful for the people in,0,None,length)),Usage(2,16,18))),200,,
List(cache-control: no-cache, must-revalidate, cf-ray: 
7c5284aee95f3570-WAW, access-control-allow-origin: *, 
openai-version: 2020-10-01, 
x-ratelimit-remaining-requests: 59,openai-organization: sml-z2a9cc, 
alt-svc: h3=":443"; ma=86400, h3-29=":443"; 
ma=86400, openai-model: text-davinci-003, server: cloudflare, 
x-ratelimit-limit-tokens: 
150000, cf-cache-status: DYNAMIC, content-encoding: gzip, 
x-ratelimit-limit-requests: 60, date: Wed, 10 May 2023 13:27:13 GMT, 
openai-processing-ms: 1181, 
x-ratelimit-remaining-tokens: 149984, content-type: application/json, 
x-request-id: 89a3c893e4c53492da75c9b8adaa6bdf, 
:status: 200, x-ratelimit-reset-tokens: 6ms, x-ratelimit-reset-requests: 1s, 
strict-transport-security: max-age=15724800; includeSubDomains),
List(),RequestMetadata(POST,https://api.openai.com/v1/completions,
Vector(Accept-Encoding: gzip, deflate, Authorization: ***, 
Content-Type: application/json; charset=utf-8)))

And it also works as expected.

Summary

JSON serialization in Scala is an unexpectedly difficult topic, especially when the library documentation isn’t ready for the newest version of the language and some functionality is still missing (yet to be added). Nevertheless, I hope you find this blog post helpful, and if you haven’t checked how to achieve the same result using the uPickle library, I wrote a blog post on that as well.
