Dogs Chasing Squirrels

A software development blog


Parsing streaming JSON in TypeScript


On our servers, we write large volumes of data directly to the response stream in order to avoid the memory pressure of holding all of the data in memory at once.

In JavaScript/TypeScript, although the fetch API supports streaming responses, there is no such thing, in the browser today, as a streaming JSON parser. As an exercise, I wrote a function that wraps a response in an AsyncGenerator to yield objects of some type T, where the response body consists of a JSON array of that type.
Note that the code yields parsed objects in batches. If every parsed object were yielded individually, the “for await” loop would create so many promises that performance would degrade terribly.

  // Given a response that returns a JSON array of objects of type T, this
  // function returns an async generator that yields batches of objects of
  // type T of a given batch size.
  // Note: If the batch size is too small, the many async calls reduce performance.
  // Note: This simple scanner assumes that '{' and '}' never appear inside
  // JSON string values.
  async function* parseStreaming3<T>(response: Response, batchSize: number = 100): AsyncGenerator<T[]> {
    // If the response has no response body, stop.  This will only happen if something went wrong with the request.
    if (null === response.body) {
      console.warn(`Response has no body.`)
    } else {
      // The JSON object start character, '{'
      const START_OBJECT = 123;
      // The JSON object end character, '}'
      const END_OBJECT = 125;
      // Create a decoder
      const decoder = new TextDecoder('utf-8');
      // Get a streaming reader for the response body
      const reader = response.body.getReader();
      // Keep track of the object depth
      let depth = 0
      // If an object spans two chunks, the previous bytes that represent the end of the previous buffer
      let previousBytes: Uint8Array | undefined = undefined;
      // The start index of the current object
      let startIndex: number | undefined = undefined;
      // The current batch of items
      let batch: T[] = [];
      // eslint-disable-next-line no-constant-condition
      while (true) {
        // Get the bytes and whether the body is done
        const { done, value: bytes } = await reader.read();
        // If the read returned no bytes, there's nothing to scan.
        // If we have bytes...
        if (undefined !== bytes) {
          // For each byte in the value...
          for (let i = 0; i < bytes.length; i++) {
            // Get the byte
            const byte = bytes[i];
            // If the byte is the start of a JSON object...
            if (START_OBJECT === byte) {
              // Increment the depth
              depth += 1;
              // If the depth is 1, meaning that this is a top-level object, set the start index
              if (1 === depth) {
                startIndex = i;
              }
              // If the byte is the end of an object...
            } else if (END_OBJECT === byte) {
              // If this is a top-level object...
              if (1 === depth) {
                // If there's a previous start index and previous bytes...
                if (undefined !== previousBytes) {
                  try {
                    // Combine the previous bytes with the bytes up to and
                    // including the current byte, then decode them all at once
                    // so that a multi-byte UTF-8 character split across two
                    // chunks still decodes correctly.
                    const objectBytes = new Uint8Array(previousBytes.length + i + 1);
                    objectBytes.set(previousBytes);
                    objectBytes.set(bytes.subarray(0, i + 1), previousBytes.length);
                    const json = decoder.decode(objectBytes);
                    // Parse the JSON into an object of the given type
                    const obj: T = JSON.parse(json);
                    // Add the parsed object to the batch
                    batch.push(obj);
                  } catch(e) {
                    console.warn(e)
                    console.log(` - previous json = `, decoder.decode(previousBytes))
                    console.log(` - json = `, decoder.decode(bytes.subarray(0, i + 1)))
                    // Stop
                    return
                  }
                  // Reset the previous bytes and the start index, which
                  // belonged to the object we just finished parsing
                  previousBytes = undefined;
                  startIndex = undefined;
                  // If there's a start index...
                } else if (undefined !== startIndex) {
                  try {
                    // Get the JSON from the start index to the current index (inclusive)
                    const json = decoder.decode(bytes.subarray(startIndex, i + 1));
                    // Parse the JSON into an object of the given type
                    const obj: T = JSON.parse(json);
                    // Add the parsed object to the batch
                    batch.push(obj);
                    // Un-set the start index
                    startIndex = undefined;
                  } catch(e) {
                    console.warn(e)
                  }
                }
                // If the batch is at the batch size...
                if (batch.length === batchSize) {
                  // Yield the batch
                  yield batch;
                  // Reset the batch
                  batch = [];
                }
              }
              // Decrement the depth
              depth -= 1;
            }
          }
          // Because the start index is cleared at the end of each object,
          // if we're ending the loop with a start index, we must not have
          // encountered the end of the object, meaning that the object
          // spans (at least) two reads.
          if (undefined !== startIndex) {
            // If we have no previous bytes...
            if (undefined === previousBytes) {
              // Save the bytes from the start of the object to end of the buffer.
              // We'll combine this json with the next when we encounter the end of the
              // object in the next read.
              previousBytes = bytes.subarray(startIndex);
            } else {
              // There must not have been an end of the object in the previous read,
              // meaning that the read contains some middle section of an object
              // It happens sometimes, if we happen to get a particularly short read.
              // Combine the previous bytes with the current bytes, extending the data.
              const combinedBytes: Uint8Array = new Uint8Array(previousBytes.length + bytes.length);
              combinedBytes.set(previousBytes);
              combinedBytes.set(bytes, previousBytes.length);
              previousBytes = combinedBytes
            }
          }
        }
        // If we're at the end of the response body, stop.  There's no more data to read.
        if (done) {
          break;
        }
      }
      // If items remain in the batch, yield them
      if (batch.length > 0) {
        yield batch;
      }
    }
  }
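
For reference, here’s a minimal sketch of how the generator might be consumed, assuming a hypothetical /api/items endpoint that returns a JSON array of Item objects (both the URL and the Item type are made up for illustration):

  interface Item {
    id: number;
    name: string;
  }

  async function loadItems(): Promise<void> {
    const response = await fetch('/api/items');
    // Iterate over batches of parsed objects as they arrive.
    for await (const batch of parseStreaming3<Item>(response)) {
      console.log(`Received a batch of ${batch.length} items`);
    }
  }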

Hosting a compiled SPA on .NET Core


Let’s imagine we have a single page application (SPA) to handle the client-side logic and a .NET WebAPI application to handle the server side. It’s common to run the SPA on a dev server like Node.js or Vite, and Microsoft has examples of how to create a proxy application that passes SPA-related calls through to it.

Let’s imagine, though, that we want to compile our single page application and have .NET serve up the compiled files statically, without our having to run another server. Here, I’ll demonstrate how to do that. The source code for the demo is on GitHub.

First, let’s create the folders we need. I’m going to call the project “SpaDemo”. Also note that I’m running the following on Linux, so adapt the shell commands as you see fit.

Getting Started

Make the main folder.

$ mkdir SpaDemo

Server Project

In our main folder, create a folder for our .NET server project.

$ cd SpaDemo
$ mkdir server
$ cd server

Create a .NET solution:

$ dotnet new sln -n DemoServer

In it, create a new webapi project and add the project to the solution:

$ dotnet new webapi -n DemoServer
$ dotnet sln DemoServer.sln add DemoServer/DemoServer.csproj

This will create .NET’s default “WeatherForecast” app. Let’s get rid of that stuff and add a new controller that listens to the “/api/demo” route:

using Microsoft.AspNetCore.Mvc;

namespace DemoServer.Controllers;

[ApiController]
[Route( "/api/[controller]" )]
public class DemoController : ControllerBase {
}

Imagine we want to offer up some .NET configuration data to our SPA client. One way to do this would be to add some values to appsettings.json, e.g.:

{"someName":"SomeValue"}

Then host them with a custom endpoint in our controller:

	private readonly IConfiguration _configuration;

	public DemoController(IConfiguration configuration) {
		this._configuration = configuration;
	}

	[HttpGet]
	[Route("config")]
	public IActionResult GetConfig() {
		// Create a dynamic configuration object
		var config = new {
			SomeName = this._configuration.GetValue<string>( "SomeKey" )
		};
		// Return it as JSON
		return new JsonResult( config );
	}

If we run the application now, we can go to /api/demo/config and see the following:

{"someName":"SomeValue"}

Client Project

I’m going to create the client project in Svelte.

First, go back to our “SpaDemo” folder and create the client application:

$ cd ..
$ npm create vite@latest
✔ Project name: … client
✔ Select a framework: › Svelte
✔ Select a variant: › TypeScript

Scaffolding project in /home/mike/Repos/SpaDemo/client...

Done. Now run:

  cd client
  npm install
  npm run dev

If you do as instructed, you’ll get the “Vite + Svelte” demo page with its counter application.

I want to test routing, so install the svelte-routing library:

$ npm install svelte-routing

added 1 package, and audited 97 packages in 654ms

10 packages are looking for funding
  run `npm fund` for details

found 0 vulnerabilities

Then we’ll modify the application into a very simple three-page application.

App.svelte:

<script lang="ts">
  import { Router, Route } from 'svelte-routing'
  import Home from "./lib/Home.svelte";
  import Page1 from "./lib/Page1.svelte";
  import Page2 from "./lib/Page2.svelte";
</script>
<main>
    <Router>
        <Route path="/">
            <Home/>
        </Route>
        <Route path="/page1">
            <Page1/>
        </Route>
        <Route path="/page2">
            <Page2/>
        </Route>
    </Router>
</main>

Home.svelte:

<script lang="ts">
    import Links from "./Links.svelte";
</script>
<h1>Home</h1>
<Links/>

Page1.svelte:

<script lang="ts">
    import Links from "./Links.svelte";
</script>
<h1>Page 1</h1>
<Links/>

Page2.svelte:

<script lang="ts">
    import Links from "./Links.svelte";
</script>
<h1>Page 2</h1>
<Links/>

Links.svelte:

<script lang="ts">
    import { Link } from 'svelte-routing'
</script>
<div>
    <Link to="/">Go Home</Link>
    <Link to="/page1">Go to Page 1</Link>
    <Link to="/page2">Go to Page 2</Link>
</div>
<style lang="css">
    div {
        display: flex;
        flex-direction: column;
    }
</style>

Run the application with npm run dev. It will show a Home page with links to Page1 and Page2, which you can then navigate around.

Compile the Client

We want to compile the client to static files and we want to put the result in the server’s wwwroot directory.

First, modify the client’s vite.config.ts file to build to our desired folder:

import { defineConfig } from 'vite'
import { svelte } from '@sveltejs/vite-plugin-svelte'

// https://vitejs.dev/config/
export default defineConfig({
  plugins: [svelte()],
  build: {
    outDir: '../server/DemoServer/wwwroot',
    emptyOutDir: true,
    rollupOptions: {
      output: {
      }
    }
  }
})

Run npm run build to compile:

$ npm run build

> client@0.0.0 build
> vite build

vite v4.4.9 building for production...
✓ 42 modules transformed.
../server/DemoServer/wwwroot/index.html                  0.46 kB │ gzip: 0.29 kB
../server/DemoServer/wwwroot/assets/index-acd3aff5.css   1.09 kB │ gzip: 0.58 kB
../server/DemoServer/wwwroot/assets/index-1625dc16.js   24.76 kB │ gzip: 9.57 kB
✓ built in 560ms

If you navigate to the folder, you’ll see that we have an index.html file in the root and then some CSS and JS files in the assets folder.

Configure the Server

If you run the .NET application now and go to / you’ll get a nice 404. To tell it that it should handle static files, we’ll add “UseStaticFiles”, and to let it know that / should go to /index.html we’ll add “UseDefaultFiles”:

app.UseDefaultFiles();
app.UseStaticFiles();

Restart your .NET application, navigate to / and, voilà, you’ll have a working Svelte application. You should be able to navigate around to /page1 and /page2 and it will all work correctly.

Now try loading /page1 directly in the browser. 404! When we started at /, .NET routed that to index.html and thereafter our SPA was sneakily rewriting the URL to fake /page1 and /page2 endpoints. If we go directly to those URLs, .NET won’t know how to route it. To fix that, we finally add:

app.MapFallbackToFile( "index.html" );

Now when we go to /page1, .NET will happily pass it back to our SPA which will route it correctly.
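
To put the pieces together, here’s a sketch of how the relevant part of Program.cs might end up looking (assuming the minimal-hosting template that dotnet new webapi generates):

var builder = WebApplication.CreateBuilder( args );
builder.Services.AddControllers();
var app = builder.Build();

// Map "/" to "/index.html"
app.UseDefaultFiles();
// Serve the compiled SPA files from wwwroot
app.UseStaticFiles();
// Route the /api/... calls to our controllers
app.MapControllers();
// Send unmatched routes back to the SPA for client-side routing
app.MapFallbackToFile( "index.html" );

app.Run();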

Conclusion

We now have a compiled Svelte single-page application running as static files behind .NET with routing working as expected. Again, the source code for the demo is on GitHub.

The Last Guy’s Code is Terrible and Needs to be Rewritten


It’s a cliché in software development that every developer taking over a project will declare that the last guy’s code is terrible and needs to be rewritten.  Surely the last guy thought his code was perfectly fine.  Who is right?  Both are. 

Maintainability is a quality of code.  An important one, in fact.  The last guy understood his code and what it was doing, so to him it was perfectly maintainable and of high quality.  The new guy coming in can’t understand what’s going on, and so to him it’s of low quality.  The real question is: how can we make code more maintainable?

The most maintainable code is code that you yourself wrote in the last day or so.  You can remember what you were doing and why.

The next most maintainable code is code that you wrote in the past.  You may not remember what you were doing, but the code is familiar and so you can usually figure it out.

After that, the next most maintainable code is code that somebody else wrote and that person is still around.  You may not understand what it does, but at least you can ask the author.

The least maintainable code is code that somebody else wrote and that person is long gone.  You don’t understand what it does and there’s no way to find out other than to trace through it.

The way to make code more maintainable, then, is to get the least maintainable code – code written by another person long ago – to be as understandable as code that you yourself wrote 10 minutes ago.  We can accomplish this with two things:

  1. Rigorous coding standards
  2. Extensive code commenting

Rigorous coding standards ensure that everyone’s code looks the same.  This includes alignment, code ordering, and naming strategies.  For example, I ensure all my classes’ properties and methods are declared alphabetically.  My team does the same.  Any one of us can easily find any method or property in anybody else’s code.

Extensive code commenting means commenting nearly every line of code, even if it seems redundant.  If you know what your code is doing, you should be able to write that down in good English.  If you don’t know what your code is doing, you shouldn’t be coding.  This makes reading code like reading a book: rather than trying to decipher a line, you can just read, in English, what it does.  It also adds a validation step: if what the code does and what the comment says it does differ, that indicates a potential bug.

Getting developers to write extensive code comments is hard.  Humans are lazy by nature and developers are as bad as any.  Many mediocre developers got into the industry because they saw software development as a way to make a good income while remaining seated and not working too hard.   A good developer, though, will understand the use of this technique and once he or she has experienced how much it helps the maintainability of software, will come to embrace it.

If you’ve achieved your goal, you should not be able to tell whose code is whose, even in a large organization, and any developer should be able to look at any other developer’s code as if it were their own.

Svelte HTTPS on localhost


There are a few things you need to do to get the Svelte default app to run over HTTPS on localhost.

First, generate a certificate and key. Save these as certificate.crt and certificate.key in the project’s folder.
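
If you don’t already have a certificate, a self-signed pair is enough for localhost. One way to generate it is with OpenSSL (the -subj value here is just a placeholder):

$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout certificate.key -out certificate.crt -subj "/CN=localhost"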

Next, edit package.json. Change the “start” command to:

"start": "sirv public --no-clear --single --http2 --port 5000 --cert certificate.crt --key certificate.key"

Note that port 5000 is the default, so technically --port 5000 is redundant, but if you were to change it, this is where you would change it. When you run npm run dev, the application will now run on https://localhost:5000. Note, though, that livereload.js will still be running as http and will fail. Here’s how we fix that.

Edit rollup.config.js. Import the Node fs function we need (note: fs is a Node built-in, so you don’t need any new dependencies in package.json):

import { readFileSync } from 'fs'

Replace the line:

 !production && livereload('public'),

With:

 !production && livereload({
    watch: 'public',
    port: 5001,
    https: {
        key:  readFileSync( 'certificate.key'),
        cert: readFileSync('certificate.crt'),
    },
}),

Here I’ve set the port to 5001, but if omitted it will default to some other port.

Adaptive firewalls will be the death of me


We had an issue today where requests to our Azure App Service were extraordinarily slow. According to our App Service metrics, requests were being handled in around 15 milliseconds, yet clients were seeing requests take half a minute. Clearly this was something related to the network. Our service is behind an Azure Application Gateway, though nothing I could find in Azure would show me the end-to-end request time and where the bottleneck was.

After doing some testing on my own, I found that my initial requests were instant but subsequent ones slowed. This was the tip-off. When you see requests slow over time, it’s an indication that some adaptive firewall is sitting in the middle and, after some initial traffic, has seen something it doesn’t like and has decided to start interfering with the traffic. Hunting around, I found the firewall rule that enables inspection of request bodies. After disabling it, it’s been smooth sailing.

When I was initially trying to find the source of the problem, I went through Microsoft Azure’s own troubleshooting guide, which ran checks on my service and made suggestions. Its “documents from the web that might help you” were no help at all.

Shockingly, “Hollywood: Where are they now?” didn’t help me fix my Azure App Service problems.

Rider – The specified task executable “fsc.exe” could not be run.


One of my F# projects started throwing this error on build after I uninstalled and reinstalled some other, unrelated .NET Core versions.

    C:\Program Files\dotnet\sdk\5.0.103\FSharp\Microsoft.FSharp.Targets(281,9): error MSB6003: The specified task executable "fsc.exe" could not be run. System.ComponentModel.Win32Exception (193): The specified executable is not a valid application for this OS platform. [C:\FSharp.fsproj]
    C:\Program Files\dotnet\sdk\5.0.103\FSharp\Microsoft.FSharp.Targets(281,9): error MSB6003:    at System.Diagnostics.Process.StartWithCreateProcess(ProcessStartInfo startInfo) [C:\FSharp.fsproj]
    C:\Program Files\dotnet\sdk\5.0.103\FSharp\Microsoft.FSharp.Targets(281,9): error MSB6003:    at System.Diagnostics.Process.Start() [C:\FSharp.fsproj]
    C:\Program Files\dotnet\sdk\5.0.103\FSharp\Microsoft.FSharp.Targets(281,9): error MSB6003:    at Microsoft.Build.Utilities.ToolTask.ExecuteTool(String pathToTool, String responseFileCommands, String commandLineCommands) [C:\FSharp.fsproj]
    C:\Program Files\dotnet\sdk\5.0.103\FSharp\Microsoft.FSharp.Targets(281,9): error MSB6003:    at Microsoft.Build.Utilities.ToolTask.Execute() [C:\FSharp.fsproj]

Googling the error didn’t turn up anything useful. I started a new F# project, which compiled without errors. That led me to the solution: delete the failing project’s .idea folder. After reopening Rider, problem solved.

Streaming a response in .NET Core WebApi


We, as web developers, should try to avoid loading files into memory before returning them via our APIs. Servers are a shared resource and so we’d like to use as little memory as we can. We do this by writing large responses out as a stream.

In the ASP.NET MVC days, I would use PushStreamContent to stream data out in a Web API. That doesn’t seem to exist in .NET Core and, even if it did, we don’t need it anyway. There’s an easy way to get direct access to the output stream: the controller’s this.Response.Body, which is a Stream.

In this sample, I just grab a file out of my downloads folder and stream it back out:

[HttpGet]
[Route( "streaming" )]
public async Task GetStreaming() {
    const string filePath = @"C:\Users\mike\Downloads\dotnet-sdk-3.1.201-win-x64.exe";
    this.Response.StatusCode = 200;
    this.Response.Headers.Add( HeaderNames.ContentDisposition, $"attachment; filename=\"{Path.GetFileName( filePath )}\"" );
    this.Response.Headers.Add( HeaderNames.ContentType, "application/octet-stream"  );
    using var inputStream = new FileStream( filePath, FileMode.Open, FileAccess.Read );
    var outputStream = this.Response.Body;
    const int bufferSize = 1 << 10;
    var buffer = new byte[bufferSize];
    while ( true ) {
        var bytesRead = await inputStream.ReadAsync( buffer, 0, bufferSize );
        if ( bytesRead == 0 ) break;
        await outputStream.WriteAsync( buffer, 0, bytesRead );
    }
    await outputStream.FlushAsync();
}

This does the same thing in F#:

[<HttpGet>]
[<Route("streaming")>]
member __.GetStreaming() = async {
    let filePath = @"C:\Users\mike\Downloads\dotnet-sdk-3.1.201-win-x64.exe"
    __.Response.StatusCode <- 200
    __.Response.Headers.Add( HeaderNames.ContentDisposition, StringValues( sprintf "attachment; filename=\"%s\"" ( System.IO.Path.GetFileName( filePath ) ) ) )
    __.Response.Headers.Add( HeaderNames.ContentType, StringValues( "application/octet-stream" ) )
    use inputStream = new FileStream( filePath, FileMode.Open, FileAccess.Read )
    let outputStream = __.Response.Body
    let bufferSize = 1 <<< 10
    let buffer = Array.zeroCreate<byte> bufferSize
    let mutable loop = true
    while loop do
        let! bytesRead = inputStream.ReadAsync( buffer, 0, bufferSize ) |> Async.AwaitTask
        match bytesRead with
        | 0 -> loop <- false
        | _ -> do! outputStream.WriteAsync( buffer, 0, bytesRead ) |> Async.AwaitTask
    do! outputStream.FlushAsync() |> Async.AwaitTask
    return EmptyResult()
}

A couple of important notes:
1. By default, you have to write to the stream using the Async methods. If you try to write with non-Async methods, you’ll get the error “Synchronous operations are disallowed. Call WriteAsync or set AllowSynchronousIO to true instead.” and, as the error says, you’ll have to enable the AllowSynchronousIO setting.
2. In C# you can have your streaming controller method return nothing at all. If you try the same in F#, you’ll get the error, midway through the response, “StatusCode cannot be set because the response has already started”. The solution is to have the method return an EmptyResult().
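
For the first note, here’s a sketch of what enabling that setting might look like for Kestrel, should you really want synchronous writes (KestrelServerOptions comes from Microsoft.AspNetCore.Server.Kestrel.Core):

services.Configure<KestrelServerOptions>( options => {
    // Allow controller actions to use the synchronous Write/Flush methods
    options.AllowSynchronousIO = true;
} );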

Railway-Oriented Programming in F# and WebAPI


If you’re interested in learning about using railway-oriented programming in F#, you should be reading Railway oriented programming at fsharpforfunandprofit.com and The Marvels of Monads at microsoft.com. Stylish F# by Kit Eason has also been a help.

Functional programming is about functional composition: pipelining functions from one to the next to get a result. Railway-oriented programming is about changing that pipeline into a track: if an operation succeeds, it stays on the success track; if it fails, it cuts over to a failure track. F# already has the Result type built in, a discriminated union indicating success (Ok) or failure (Error), along with the monadic functions bind, map, and mapError.

I was interested in how these could be applied to a WebAPI endpoint. Let’s say we’re passing these results along a pipeline. What’s failure? In the end, it will be an IActionResult of some kind, probably a StatusCodeResult: 404, 401, 500, whatever. And what’s success? In the end, that’s an IActionResult too, though with a status code of 200.

There’s an F# functional web project that already does something like this, Suave.io, though it doesn’t look to be maintained anymore. Even so, it has some good async implementations of the various railway/monadic functions like bind, compose, etc. I’ve tried to adapt them into my own AsyncResult module.

Result

The result object is unchanged:

type Result<'TSuccess,'TFailure> =
    | Ok of 'TSuccess
    | Error of 'TFailure

Bind

First, bind, which takes some Result input and a function and, if the input is Ok, calls the function with the contents, and, if the input is Error, short-cuts and returns the error.
An async bind looks like this:

let bind f x = async {
    let! x' = x
    match x' with
    | Error e -> return Error e
    | Ok x'' -> return! f x''
}

Map

Next we have map. Say we have a function that takes some object and manipulates it returning another object. Map lets us insert that into our railway, with the function operating on the contents of the Ok result.
An async map looks like this:

let map f x = async {
    let! x' = x
    match x' with
    | Error e -> return Error e
    | Ok x'' ->
        let! r = f x''
        return Ok( r )
}

MapError

MapError is like map but instead we expect the function to operate on the Error result.
This is my async mapError:

let mapError f x = async {
    let! x' = x
    match x' with
    | Error e ->
        let! r = f e
        return Error( r )
    | Ok ok ->
        return Ok ok
}

Compose

Next we have compose, which lets us pipe two bound functions together. If the first function returns an Ok(x) as output, the second function takes the x as input and returns some Result. If the first function returns an Error, the second is never called.
This is the async compose:

let compose f1 f2 =
    fun x -> bind f2 (f1 x)

Custom Operators

We can create a few custom operators for our functions:

// bind operator
let (>>=) a b =
    bind b a

// compose operator
let (>=>) a b =
    compose a b

An Example WebAPI Controller

Let’s imagine a WebAPI controller endpoint that implements GET /thing/{id} where we return some Thing with the given ID. Normally we would:
* Check that the user has permission to get the thing.
* Get the thing from the database.
* Format it into JSON.
* Return it.
If the user doesn’t have permissions, we should get a 401 Unauthorized. If the Thing with the given ID isn’t found, we should get a 404 Not Found.

The functions making up our railway

Usually we want a connection to the database but I’m just going to fake it for this example:

let openConnection(): Async<IDbConnection> =
    async {
        return null
    }

We might also have a function that, given the identity in the HttpContext and a database connection, could fetch the user’s roles. Again we’ll fake it. For testing purposes, we’ll say the user is an admin unless the thing ID ends in 99:

let getRole ( connection : IDbConnection ) ( context : HttpContext ) =
    async {
        if context.Request.Path.Value.EndsWith("99") then return Ok "user"
        else return Ok "admin"
    }

Now we come to our first railway component. We want to check that the user has the given role. If he does, we return Ok; if not, an Error with the 401 Unauthorized code (not yet a StatusCodeResult):

let ensureUserHasRole requiredRole userRole =
    async {
        if userRole = requiredRole then return Ok()
        else return Error( HttpStatusCode.Unauthorized )
    }

Next we have a railway component that fetches the thing by ID. For testing purposes, we’ll say that if the ID is 0 we’ll return an Option.None and otherwise return an Option.Some. Although I haven’t added it here, I could imagine adding a try/catch that returns an Error 500 Internal Server Error when an exception is caught.

let fetchThingById (connection: IDbConnection) (thingId: int) () =
    async {
        match thingId with
        | 0 ->
            // Pretend we couldn't find it.
            return Ok( None )
        | _ ->
            // Pretend we got this from the DB
            return Ok( Some ( { Id = thingId; Name = "test" } ) )
    }

Our next railway component checks that a given object is found. If it’s Some, it returns Ok with the result. If it’s None, we get an Error, 404 Not Found.

let ensureFound ( value : 'a option ) = async {
    match value with
    | Some value' -> return Ok( value' )
    | None -> return Error( HttpStatusCode.NotFound )
}

Next we’ll create a function that just converts a value to a JSON result (pretending there might be more complicated formatting going on here):

let toJsonResult ( value : 'a ) =
    async {
        return ( JsonResult( value ):> IActionResult )
    }    

Finally, we’ll add a function to convert that HttpStatusCode to a StatusCodeResult (also overkill – we could probably inline it):

let statusCodeToErrorResult ( code : HttpStatusCode ) = async {
    return ( StatusCodeResult( (int)code ) :> IActionResult )
}

In the end, we’re going to have an Ok result of type IActionResult and an Error, also of type IActionResult. I want to coalesce the two into whatever the result is, regardless of whether it’s Ok or Error:

// If Error and OK are of the same type, returns the enclosed value.
let coalesce r = async {
    let! r' = r
    match r' with
    | Error e -> return e
    | Ok ok -> return ok
}

Putting it together

Here’s our railway in action:

// GET /thing/{thingId}
let getThing (thingId: int) (context: HttpContext): Async<IActionResult> =
    async {
        // Create a DB connection
        let! connection = openConnection()
        // Get the result
        let! result =
            // Starting with the context...
            context |> (
                // Get the user's role
                ( getRole connection )
                // Ensure the user is an admin.  
                >=> ( ensureUserHasRole "admin" )
                // Fetch the thing by ID
                >=> ( fetchThingById connection thingId ) 
                // Ensure it was found
                >=> ensureFound
                // Convert it to JSON
                >> ( map toJsonResult )
                // Map the error HttpStatusCode to an error StatusCodeResult
                >> ( mapError statusCodeToErrorResult )
                // Coalesce the OK and Error into one IActionResult
                >> coalesce
            )
        // Return the result
        return result
    }

To summarize, we:
* Get the user’s role, resulting in an Ok with the role (and no Error, though I could imagine catching an exception and returning a 500).
* See if the user has the role we need, resulting in an Ok with no content or an Error(401).
* Fetch a Thing from the database, resulting in an Option.Some or Option.None.
* Check that it’s not None, returning an Error(404) if it is or an Ok(Thing) if it isn’t.
* Map the Ok(Thing) into a Thing and turn the Thing into a JsonResult,
* or map the Error(HttpStatusCode) into an HttpStatusCode and turn it into a StatusCodeResult.
* Take whichever result we ended up with, the JsonResult or the StatusCodeResult, and return it.

If we run the website and call https://localhost:5001/thing/1 we get the JSON for our Thing.

{"id":1,"name":"test"}

If we call /thing/0 we get 404 Not Found. If we call /thing/99 we get 401 Unauthorized.

There’s room here for some other methods. I could imagine wanting to wrap a call in a try/catch and return a 500 Server Error if it fails, for example.

The best part is that it’s a nice, readable, railway of functions. And our custom operators make it look good.

The code for this post can be found on GitHub.

Signature validation failed. Unable to match ‘kid’


I came across this error trying to validate tokens between a React app and an Okta developer account, and Stack Overflow was giving me nothing.

On the client side, I was using oidc-client.js, like so:

  const oidcConfiguration: any = {
    authority: env.__AUTH_AUTHORITY__,
    redirect_uri: env.__AUTH_REDIRECT_URI__,
    post_logout_redirect_uri: env.__AUTH_POST_LOGOUT_REDIRECT_URI__,
    silent_redirect_uri: env.__AUTH_SILENT_RENEW_URI__,
    client_id: env.__AUTH_CLIENT_ID__,
    response_type: 'id_token token',
    scope: 'openid profile email',
  }

The authority was my dev account, something like https://dev-123456.okta.com. I got this working and managed to get myself an access token.

On the .NET Core side, I was using basic JWT validation. I had been following Okta’s example, but they amount to the same thing.

// Add JWT Bearer authentication
services.AddAuthentication( sharedOptions => {
      sharedOptions.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
      sharedOptions.DefaultChallengeScheme    = JwtBearerDefaults.AuthenticationScheme;
   } )
   .AddJwtBearer( options => {
      options.Authority = this._configuration["Auth:Authority"];
      options.Audience  = this._configuration["Auth:Audience"];
   } );

Using the same authority as getting the token, I got an error message that told me I needed to set up an authorization server. I set one up that was, by default, something like https://dev-123456.okta.com/oauth2/default. When I tried to authenticate my token against this server, I got the error “Signature validation failed. Unable to match ‘kid’”.

The solution turned out to be pretty simple: The client has to be changed to also use https://dev-123456.okta.com/oauth2/default to get the token instead of just https://dev-123456.okta.com.
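
In configuration terms, the fix looks like this (the dev-123456 domain is a placeholder):

  const oidcConfiguration: any = {
    // Use the authorization server's URL, not the bare Okta domain
    authority: 'https://dev-123456.okta.com/oauth2/default',
    // ...everything else stays the same
  }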

Signing MSI Installers with a Code Signing Certificate


Signing installer MSIs with a code signing certificate prevents Windows from showing a big red “This application is untrusted!” warning when an installer is launched.

I recently had to set up code signing with a certificate we got from GoDaddy and it’s a little convoluted so I’ll document it here.

Creating and using a code signing certificate involves three passwords, which we’ll call:
* REQUEST_PASSWORD
* EXPORT_PASSWORD
* SIGNING_PASSWORD

Getting a Code Signing Certificate

We get our code signing certificates from GoDaddy.

Generating a Certificate Request

For this we’ll need our REQUEST_PASSWORD.
Following the instructions here we’ll end up with the files
* request.csr
* request.pfx

The pfx file has our private key embedded in it, so keep it safe. The request.csr file is what gets submitted to GoDaddy.

When the request is processed, GoDaddy will send us certificate files. These are randomly named, something like:
* SOMERANDOM-SHA2.pem
* SOMERANDOM-SHA2.spc

Extract the Private Key from the Certificate Request

We need the private key from the certificate request as a .key file. To do this we need OpenSSL, which can be installed as part of Cygwin.

Generate the key via (where $ is the Cygwin bash prompt):

$ openssl pkcs12 -in request.pfx -nocerts -out request.key.pem -nodes
Enter Import Password: REQUEST_PASSWORD

The key will be in request.key.pem

Create a PVK File

Next we need to create a PVK file. For this we need pvk.exe.

Run:

PS C:\tmp\ssl> .\pvk.exe -in .\request.key.pem -topvk -strong -out cert.key.pvk
Enter Password: EXPORT_PASSWORD
Verifying - Enter Password: EXPORT_PASSWORD

This generates cert.key.pvk

Combine the PVK and SPC into a PFX

Installers are signed with a PFX file which is a combination of the key and certificate. For this we need pvk2pfx.exe.

Run:

pvk2pfx.exe -pvk cert.key.pvk -pi EXPORT_PASSWORD -spc SOMERANDOM-SHA2.spc -pfx codesign.pfx -po SIGNING_PASSWORD -f

This generates codesign.pfx. This, along with SIGNING_PASSWORD, is what we need to sign the MSI. When the code signing certificate expires, we’ll need to repeat the steps above.

Signing the Installer

Once we have the PFX and the signing password, we can sign the installer. For this we need signtool.exe.

The command to sign the installer is:

.\signtool.exe sign /f .\codesign.pfx /p SIGNING_PASSWORD /d "(some description)" /tr http://timestamp.digicert.com /v "PATH_TO_MSI"

There are other timestamp servers you can use.
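
Once the MSI is signed, signtool can also confirm the signature (/pa selects the default Authenticode verification policy):

.\signtool.exe verify /pa /v "PATH_TO_MSI"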