Dogs Chasing Squirrels

A software development blog

About Mike Bennett

Svelte HTTPS on localhost – vite edition


In a previous post, I showed how to enable HTTPS on Svelte, back before Svelte had switched over to Vite. Now, on Vite:

First, generate a certificate and key. Save these as certificate.crt and certificate.key in the project’s folder.

Next, edit vite.config.ts. Add the following to the top:

import fs from 'fs';

Then, in the server configuration:

    server: {
        https: {
            key: fs.readFileSync('certificate.key'),
            cert: fs.readFileSync('certificate.crt'),
        },
    },

The Last Guy’s Code is Terrible and Needs to be Rewritten


It’s a cliché in software development that every developer taking over a project will declare that the last guy’s code is terrible and needs to be rewritten.  Surely the last guy thought his code was perfectly fine.  Who is right?  Both are. 

Maintainability is a quality of code, and an important one. The last guy understood his code and what it was doing, so to him it was perfectly maintainable and of high quality. The new guy coming in can’t understand what’s going on, so to him it’s of low quality. The real question is: how can we make code more maintainable?

The most maintainable code is code that you yourself wrote in the last day or so.  You can remember what you were doing and why.

The next most maintainable code is code that you wrote in the past.  You may not remember what you were doing, but the code is familiar and so you can usually figure it out.

After that, the next most maintainable code is code that somebody else wrote and that person is still around.  You may not understand what it does, but at least you can ask the author.

The least maintainable code is code that somebody else wrote and that person is long gone.  You don’t understand what it does and there’s no way to find out other than to trace through it.

The way to make code more maintainable, then, is to get the least maintainable code – code written by another person long ago – to be as understandable as code that you yourself wrote 10 minutes ago. We can accomplish this with two things:

  1. Rigorous coding standards
  2. Extensive code commenting

Rigorous coding standards ensure that everyone’s code looks the same. This includes alignment, code ordering, and naming strategies. For example, I ensure all my classes’ properties and methods are declared alphabetically. My team does the same. Any one of us can easily find any method or property in anybody else’s code.

Extensive code commenting means commenting nearly every line of code, even if it seems redundant.  If you know what your code is doing, you should be able to write that down in good English.  If you don’t know what your code is doing, you shouldn’t be coding.  This makes reading code like reading a book – rather than trying to decipher a line, you can just read, in English, what it does.  It adds a further validation step, which is that if what the code does and what the comment says the code does are different, this indicates a potential bug. 

Getting developers to write extensive code comments is hard.  Humans are lazy by nature and developers are as bad as any.  Many mediocre developers got into the industry because they saw software development as a way to make a good income while remaining seated and not working too hard.   A good developer, though, will understand the use of this technique and once he or she has experienced how much it helps the maintainability of software, will come to embrace it.

If you’ve achieved your goal, you should not be able to tell whose code is whose even in a large organization and any developer should be able to look at any other developer’s code as if it’s their own.

Svelte HTTPS on localhost


There are a few things you need to do to get the Svelte default app to run over HTTPS on localhost.

First, generate a certificate and key. Save these as certificate.crt and certificate.key in the project’s folder.

Next, edit package.json. Change the “start” command to:

"start": "sirv public --no-clear --single --http2 --port 5000 --cert certificate.crt --key certificate.key"

Note that port 5000 is the default, so technically --port 5000 is redundant, but if you were to change it, this is where you would change it. When you run npm run dev, the application will now run on https://localhost:5000. Note, though, that livereload.js will still be running as http and will fail. Here’s how we fix that.

Edit rollup.config.js. Import the node fs function we need (note: fs is a Node built-in, so you don’t need to add anything to package.json):

import { readFileSync } from 'fs'

Replace the line:

 !production && livereload('public'),

with:

 !production && livereload({
    watch: 'public',
    port: 5001,
    https: {
        key:  readFileSync('certificate.key'),
        cert: readFileSync('certificate.crt'),
    },
 }),

Here I’ve set the port to 5001, but if omitted it will default to some other port.

Adaptive firewalls will be the death of me


We had an issue today where requests to our Azure App Service were extraordinarily slow. According to our app service metrics, requests were being handled in around 15 milliseconds, however clients were seeing requests take half a minute. Clearly this was something related to the network. Our service is behind an Azure Application Gateway, though nothing in Azure that I could find would show me the end-to-end request time and where the bottleneck was. After doing some testing on my own, I found that my initial requests were instant but then subsequently slowed. This was the tip-off. When you see a request slow over time, it’s an indication that some adaptive firewall is sitting in the middle and, after some initial traffic, has seen something it doesn’t like and has decided to start interfering with the traffic. Hunting around, I found the firewall rule enabling the firewall’s inspection of the body of requests. After disabling that, it’s been smooth sailing.

When I was initially trying to find the source of the problem, I went through Microsoft and Azure’s own troubleshooting guide, which ran checks on my software and made suggestions. Its “documents from the web that might help you” were no help at all.

Shockingly, “Hollywood: Where are they now?” didn’t help me fix my Azure App Service problems.

Rider – The specified task executable “fsc.exe” could not be run.


One of my F# projects started throwing this error on build after I uninstalled and reinstalled some other, unrelated .NET Core versions.

    C:\Program Files\dotnet\sdk\5.0.103\FSharp\Microsoft.FSharp.Targets(281,9): error MSB6003: The specified task executable "fsc.exe" could not be run. System.ComponentModel.Win32Exception (193): The specified executable is not a valid application for this OS platform. [C:\FSharp.fsproj]
    C:\Program Files\dotnet\sdk\5.0.103\FSharp\Microsoft.FSharp.Targets(281,9): error MSB6003:    at System.Diagnostics.Process.StartWithCreateProcess(ProcessStartInfo startInfo) [C:\FSharp.fsproj]
    C:\Program Files\dotnet\sdk\5.0.103\FSharp\Microsoft.FSharp.Targets(281,9): error MSB6003:    at System.Diagnostics.Process.Start() [C:\FSharp.fsproj]
    C:\Program Files\dotnet\sdk\5.0.103\FSharp\Microsoft.FSharp.Targets(281,9): error MSB6003:    at Microsoft.Build.Utilities.ToolTask.ExecuteTool(String pathToTool, String responseFileCommands, String commandLineCommands) [C:\FSharp.fsproj]
    C:\Program Files\dotnet\sdk\5.0.103\FSharp\Microsoft.FSharp.Targets(281,9): error MSB6003:    at Microsoft.Build.Utilities.ToolTask.Execute() [C:\FSharp.fsproj]

Googling the error didn’t turn up anything useful. I started a new F# project which compiled without errors, which led me to the solution: delete the failing project’s .idea folder. After reopening Rider, problem solved.

Streaming a response in .NET Core WebApi


We, as web developers, should try to avoid loading files into memory before returning them via our APIs. Servers are a shared resource and so we’d like to use as little memory as we can. We do this by writing large responses out as a stream.

In the ASP.NET MVC days, I would use PushStreamContent to stream data out in a Web API. That doesn’t seem to exist in .NET Core and, even if it did, we don’t need it anyway. There’s an easy way to get direct access to the output stream: the controller’s this.Response.Body, which is a Stream.

In this sample, I just grab a file out of my downloads folder and stream it back out:

[Route( "streaming" )]
public async Task GetStreaming() {
    const string filePath = @"C:\Users\mike\Downloads\dotnet-sdk-3.1.201-win-x64.exe";
    this.Response.StatusCode = 200;
    this.Response.Headers.Add( HeaderNames.ContentDisposition, $"attachment; filename=\"{Path.GetFileName( filePath )}\"" );
    this.Response.Headers.Add( HeaderNames.ContentType, "application/octet-stream" );
    using var inputStream = new FileStream( filePath, FileMode.Open, FileAccess.Read );
    var outputStream = this.Response.Body;
    const int bufferSize = 1 << 10;
    var buffer = new byte[bufferSize];
    while ( true ) {
        var bytesRead = await inputStream.ReadAsync( buffer, 0, bufferSize );
        if ( bytesRead == 0 ) break;
        await outputStream.WriteAsync( buffer, 0, bytesRead );
    }
    await outputStream.FlushAsync();
}

This does the same thing in F#:

member __.GetStreaming() = async {
    let filePath = @"C:\Users\mike\Downloads\dotnet-sdk-3.1.201-win-x64.exe"
    __.Response.StatusCode <- 200
    __.Response.Headers.Add( HeaderNames.ContentDisposition, StringValues( sprintf "attachment; filename=\"%s\"" ( System.IO.Path.GetFileName( filePath ) ) ) )
    __.Response.Headers.Add( HeaderNames.ContentType, StringValues( "application/octet-stream" ) )
    use inputStream = new FileStream( filePath, FileMode.Open, FileAccess.Read )
    let outputStream = __.Response.Body
    let bufferSize = 1 <<< 10
    let buffer = Array.zeroCreate<byte> bufferSize
    let mutable loop = true
    while loop do
        let! bytesRead = inputStream.ReadAsync( buffer, 0, bufferSize ) |> Async.AwaitTask
        match bytesRead with
        | 0 -> loop <- false
        | _ -> do! outputStream.WriteAsync( buffer, 0, bytesRead ) |> Async.AwaitTask
    do! outputStream.FlushAsync() |> Async.AwaitTask
    return EmptyResult()
}

A couple of important notes:
1. By default, you have to write to the stream using the Async methods. If you try to write with non-Async methods, you’ll get the error “Synchronous operations are disallowed. Call WriteAsync or set AllowSynchronousIO to true instead.” and, as the error says, you’ll have to enable the AllowSynchronousIO setting.
2. On C# you can have your streaming controller method return nothing at all. If you try the same on F#, you’ll get the error, midway through the response, “StatusCode cannot be set because the response has already started”. The solution to this is to have the method return an EmptyResult().
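Stripped of the ASP.NET specifics, both versions are the same buffered-copy loop: read a fixed-size chunk, write it, stop at end-of-stream, flush. A minimal sketch of that pattern in Python, with in-memory streams standing in for the file and Response.Body:

```python
import io

def stream_copy(input_stream, output_stream, buffer_size=1 << 10):
    """Copy in fixed-size chunks so the whole payload is never in memory at once."""
    while True:
        chunk = input_stream.read(buffer_size)
        if not chunk:  # end of stream
            break
        output_stream.write(chunk)
    output_stream.flush()

src = io.BytesIO(b"x" * 5000)   # pretend this is the file on disk
dst = io.BytesIO()              # pretend this is Response.Body
stream_copy(src, dst)
```

Whatever the language, the key point is the fixed buffer: memory use stays at buffer_size regardless of how large the file is.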

Starting and Stopping Azure Web Apps with a DevOps pipeline


We’re using Azure web services for our dev and test environments. Since we only use these environments during the day, I wanted to write some automated functions to turn them off at night and turn them back on in the morning. It can be done with Azure Functions, but I thought it would be easier to build into an Azure Pipeline where developers are all doing their work anyway, so if someone needed to work after hours they would know how to get at it.

First is the PowerShell script to start or stop a service, based on this post.

# This controls an app service.
param(
    # The tenant ID
    [Parameter(Mandatory=$true)][string] $TenantId,
    # The subscription ID
    [Parameter(Mandatory=$true)][string] $SubscriptionId,
    # The App Registration client ID
    [Parameter(Mandatory=$true)][string] $ClientId,
    # The App Registration client secret
    [Parameter(Mandatory=$true)][string] $ClientSecret,
    # The resource group of the service to control
    [Parameter(Mandatory=$true)][string] $ResourceGroup,
    # The name of the service to control
    [Parameter(Mandatory=$true)][string] $AppService,
    # The switch if we want to start
    [switch] $Start = $false,
    # The switch if we want to stop
    [switch] $Stop = $false
)

$StartOrStop = ""
if ( $Start ) {
    $StartOrStop = "start"
}
if ( $Stop ) {
    $StartOrStop = "stop"
}

# Get the authentication token
$Auth = Invoke-RestMethod `
 -Uri "https://login.microsoftonline.com/$TenantId/oauth2/token?api-version=1.0" `
 -Method Post `
 -Body @{"grant_type" = "client_credentials"; `
 "resource" = "https://management.azure.com/"; `
 "client_id" = "$ClientId"; `
 "client_secret" = "$ClientSecret"}

$HeaderValue = "Bearer " + $Auth.access_token

# Control the service
Write-Output "Executing $AppService $StartOrStop..."
Invoke-RestMethod `
-Uri "https://management.azure.com/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroup/providers/Microsoft.Web/sites/$AppService/$StartOrStop`?api-version=2018-02-01" `
-Method Post `
-Headers @{Authorization = $HeaderValue}
Write-Output "Done $StartOrStop $AppService"

Then there's the pipeline YAML to run the script. This is the "Start" version. I'll leave "Stop" as an exercise for the reader.

name: $(Date:yyyyMMdd)-$(Rev:r)
jobs:
  - job: "Build"
    pool:
      vmImage: 'windows-2019'
    variables:
      # Note: clientSecret set in Azure Pipeline
      resourceGroup: 'MY_RESOURCE_GROUP'
      appService: 'MY_APP_SERVICE'
    steps:
      - task: PowerShell@2
        displayName: 'Start $(appService)'
        inputs:
          errorActionPreference: Stop
          targetType: filePath
          filePath: '$(Build.SourcesDirectory)\ControlAppService.ps1'
          arguments: '-Start -AppService $(appService) -TenantId $(tenantId) -SubscriptionId $(subscriptionId) -ClientId $(clientId) -ClientSecret $(clientSecret) -ResourceGroup $(resourceGroup)'

I have the clientSecret set up in the Azure Pipeline’s variables as a secret value.

That’s pretty much it. The pipelines are set to run on a timer: Stop at 7pm; Start at 7am.
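The timer itself is just an Azure Pipelines scheduled trigger added to each pipeline's YAML. A sketch of the "Start at 7am" schedule (cron times are in UTC, and the branch name here is an assumption):

```yaml
schedules:
  - cron: "0 7 * * 1-5"     # 7am UTC, weekdays only
    displayName: "Morning start"
    branches:
      include:
        - main
    always: true            # run even if the repo hasn't changed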

Railway-Oriented Programming in F# and WebAPI


If you’re interested in learning about using railway-oriented programming in F#, you should be reading Railway oriented programming and The Marvels of Monads. Stylish F# by Kit Eason has also been a help.

Functional programming is about functional composition and pipelining functions from one to the next to get a result. Railway-oriented programming is about changing that pipeline to a track where if an operation succeeds, it goes forwards and if it fails, it cuts over to a failure track. F# already has the Result type built in: a discriminated union giving success (Ok) or failure (Error), along with the monadic functions bind, map, and mapError.
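The mechanics are easy to see outside F# too. Here's a minimal sketch of the success/failure track in Python, with Result modeled as ("ok", value) / ("error", reason) tuples (the names are mine, not from any library):

```python
# Model F#'s Result as tagged tuples: ("ok", value) or ("error", reason).
def ok(value): return ("ok", value)
def error(reason): return ("error", reason)

def bind(f, x):
    """If x is Ok, feed its contents to f; if it's an Error, short-circuit."""
    tag, value = x
    return f(value) if tag == "ok" else x

# Two steps on the track: parse a string, then reject negatives.
def parse(s):
    try:
        return ok(int(s))
    except ValueError:
        return error("not a number")

def positive(n):
    return ok(n) if n >= 0 else error("negative")

print(bind(positive, parse("42")))    # ('ok', 42)
print(bind(positive, parse("nope")))  # short-circuits: ('error', 'not a number')
```

Once a step fails, every later bound step is skipped and the Error rides the failure track to the end.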

I was interested in how these could be applied to a WebAPI endpoint. Let’s say we’re passing these results along a pipeline. What’s failure? In the end, it will be an IActionResult of some kind, probably a StatusCodeResult: 404, 401, 500, whatever. And success? In the end, that’s an IActionResult, too, though with a status code of 200.

There’s an F#, functional web project that already does something like this, though it doesn’t look to be maintained anymore. Even so, it has some good async implementations of the various railway/monadic functions like bind, compose, etc. I’ve tried to adapt them into my own AsyncResult module.


The result object is unchanged:

type Result<'TSuccess,'TFailure> =
    | Ok of 'TSuccess
    | Error of 'TFailure


First, bind, which takes some Result input and a function and, if the input is Ok, calls the function with the contents, and, if the input is Error, short-cuts and returns the error.
An async bind looks like this:

let bind f x = async {
    let! x' = x
    match x' with
    | Error e -> return Error e
    | Ok x'' -> return! f x''
}


Next we have map. Say we have a function that takes some object and manipulates it returning another object. Map lets us insert that into our railway, with the function operating on the contents of the Ok result.
An async map looks like this:

let map f x = async {
    let! x' = x
    match x' with
    | Error e -> return Error e
    | Ok x'' ->
        let! r = f x''
        return Ok( r )
}


MapError is like map but instead we expect the function to operate on the Error result.
This is my async mapError:

let mapError f x = async {
    let! x' = x
    match x' with
    | Error e ->
        let! r = f e
        return Error( r )
    | Ok ok ->
        return Ok ok
}


Next we have compose, which lets us pipe two bound functions together. If the first function returns an Ok(x) as output, the second function takes the x as input and returns some Result. If the first function returns an Error, the second is never called.
This is the async compose:

let compose f1 f2 =
    fun x -> bind f2 (f1 x)

Custom Operators

We can create a few custom operators for our functions:

// bind operator
let (>>=) a b =
    bind b a

// compose operator
let (>=>) a b =
    compose a b
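Python doesn't allow defining new operators, but the same chaining feel can be faked by overloading an existing one. A small sketch (a hypothetical wrapper class, continuing the tuple-based Result model):

```python
def ok(value): return ("ok", value)
def error(reason): return ("error", reason)

class R:
    """Wraps a Result so >> plays the role of the bind operator (>>=)."""
    def __init__(self, result):
        self.result = result
    def __rshift__(self, f):
        tag, value = self.result
        # Ok: run the next step; Error: pass the failure through untouched.
        return R(f(value)) if tag == "ok" else self

chained = (R(ok(2)) >> (lambda n: ok(n + 1)) >> (lambda n: ok(n * 10))).result
failed  = (R(ok(2)) >> (lambda n: error("boom")) >> (lambda n: ok(n * 10))).result
print(chained)  # ('ok', 30)
print(failed)   # ('error', 'boom')
```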

An Example WebAPI Controller

Let’s imagine a WebAPI controller endpoint that implements GET /thing/{id} where we return some Thing with the given ID. Normally we would:
* Check that the user has permission to get the thing.
* Get the thing from the database.
* Format it into JSON.
* Return it.
If the user doesn’t have permissions, we should get a 401 Unauthorized. If the Thing with the given ID isn’t found, we should get a 404 Not Found.
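The whole flow can be simulated compactly in Python with the same tuple-based Result model (the role and lookup rules below are this post's fake test rules, not real auth or database code):

```python
import json

def ok(value): return ("ok", value)
def error(reason): return ("error", reason)

def bind(f, x):
    tag, value = x
    return f(value) if tag == "ok" else x

# Fake rules from this post: the user is an admin unless the ID ends in 99,
# and the thing with ID 0 doesn't exist.
def get_role(thing_id):
    return ok("user" if str(thing_id).endswith("99") else "admin")

def ensure_role(required):
    return lambda role: ok(None) if role == required else error(401)

def fetch_thing(thing_id):
    return lambda _: ok(None if thing_id == 0 else {"id": thing_id, "name": "test"})

def ensure_found(value):
    return ok(value) if value is not None else error(404)

def get_thing(thing_id):
    result = get_role(thing_id)
    result = bind(ensure_role("admin"), result)
    result = bind(fetch_thing(thing_id), result)
    result = bind(ensure_found, result)
    tag, value = result
    # Ok becomes a 200 with JSON; Error carries the status code.
    return (200, json.dumps(value)) if tag == "ok" else (value, None)

print(get_thing(1))   # (200, '{"id": 1, "name": "test"}')
print(get_thing(0))   # (404, None)
print(get_thing(99))  # (401, None)
```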

The functions making up our railway

Usually we want a connection to the database but I’m just going to fake it for this example:

let openConnection(): Async<IDbConnection> =
    async {
        return null
    }

We might also have a function that, given the identity in the HttpContext and a database connection, could fetch the user’s roles. Again we’ll fake it. For testing purposes, we’ll say the user is an admin unless the thing ID ends in 99.

let getRole ( connection : IDbConnection ) ( context : HttpContext ) =
    async {
        if context.Request.Path.Value.EndsWith("99") then return Ok "user"
        else return Ok "admin"
    }

Now we come to our first railway component. We want to check that the user has the given role. If they do, we return Ok; if not, an Error with the 401 Unauthorized code (not yet a StatusCodeResult).

let ensureUserHasRole requiredRole userRole =
    async {
        if userRole = requiredRole then return Ok()
        else return Error( HttpStatusCode.Unauthorized )
    }

Next we have a railway component that fetches the thing by ID. For testing purposes, we’ll say that if the ID is 0 we’ll return an Option.None and otherwise return an Option.Some. Although I haven’t added it here, I could imagine adding a try/catch that returns an Error 500 Internal Server Error when an exception is caught.

let fetchThingById (connection: IDbConnection) (thingId: int) () =
    async {
        match thingId with
        | 0 ->
            // Pretend we couldn't find it.
            return Ok( None )
        | _ ->
            // Pretend we got this from the DB
            return Ok( Some ( { Id = thingId; Name = "test" } ) )
    }

Our next railway component checks that a given object is found. If it’s Some, it returns Ok with the result. If it’s None, we get an Error, 404 Not Found.

let ensureFound ( value : 'a option ) = async {
    match value with
    | Some value' -> return Ok( value' )
    | None -> return Error( HttpStatusCode.NotFound )
}

Next we’ll create a function that just converts a value to a JSON result (maybe pretending there might be more complicated formatting going on here):

let toJsonResult ( value : 'a ) =
    async {
        return ( JsonResult( value ) :> IActionResult )
    }

Finally, we’ll add a function to convert that HttpStatusCode to a StatusCodeResult (also overkill – we could probably inline it):

let statusCodeToErrorResult ( code : HttpStatusCode ) = async {
    return ( StatusCodeResult( (int)code ) :> IActionResult )
}

When we end up, we’re going to have an Ok result of type IActionResult and an Error, also of type IActionResult. I want to coalesce the two into whatever the result is, regardless of whether it’s Ok or Error:

// If Error and OK are of the same type, returns the enclosed value.
let coalesce r = async {
    let! r' = r
    match r' with
    | Error e -> return e
    | Ok ok -> return ok
}

Putting it together

Here’s our railway in action:

// GET /thing/{thingId}
let getThing (thingId: int) (context: HttpContext): Async<IActionResult> =
    async {
        // Create a DB connection
        let! connection = openConnection()
        // Get the result
        let! result =
            // Starting with the context...
            context |> (
                // Get the user's role
                ( getRole connection )
                // Ensure the user is an admin.
                >=> ( ensureUserHasRole "admin" )
                // Fetch the thing by ID
                >=> ( fetchThingById connection thingId )
                // Ensure it was found
                >=> ensureFound
                // Convert it to JSON
                >> ( map toJsonResult )
                // Map the error HttpStatusCode to an error StatusCodeResult
                >> ( mapError statusCodeToErrorResult )
                // Coalesce the OK and Error into one IActionResult
                >> coalesce )
        // Return the result
        return result
    }

To summarize, we:
* Get the user’s role, resulting in an Ok with the role (and no Error, though I could imagine catching an exception and returning a 500).
* See if the user has the role we need, resulting in an Ok with no content or an Error(401).
* Fetch a Thing from the database, resulting in an Option.Some or Option.None.
* Check that it’s not None, returning an Error(404) if it is or an Ok(Thing) if it isn’t.
* Map the Ok(Thing) into a Thing and turn the Thing into a JsonResult…
* …or map the Error(HttpStatusCode) into an HttpStatusCode and turn it into a StatusCodeResult.
* Take whichever result we ended up with, the JsonResult or the StatusCodeResult, and return it.

If we run the website and call https://localhost:5001/thing/1 we get the JSON for our Thing.


If we call /thing/0 we get 404 Not Found. If we call /thing/99 we get 401 Unauthorized.

There’s room here for some other methods. I could imagine wanting to wrap a call in a try/catch and return a 500 Server Error if it fails, for example.

The best part is that it’s a nice, readable, railway of functions. And our custom operators make it look good.

The code for this post can be found on GitHub.

Signature validation failed. Unable to match ‘kid’


I came across this error trying to validate tokens between a React app and an Okta developer page and Stack Overflow was giving me nothing.

On the client side, I was using the oidc-client.js, like so:

  const oidcConfiguration: any = {
    authority: env.__AUTH_AUTHORITY__,
    redirect_uri: env.__AUTH_REDIRECT_URI__,
    post_logout_redirect_uri: env.__AUTH_POST_LOGOUT_REDIRECT_URI__,
    silent_redirect_uri: env.__AUTH_SILENT_RENEW_URI__,
    client_id: env.__AUTH_CLIENT_ID__,
    response_type: 'id_token token',
    scope: 'openid profile email',
  };

The authority was my Okta dev account URL. I got this working and managed to get myself an access token.

On the .net core side, I was using basic JWT validation. I had been using Okta’s example but they amount to the same thing.

// Add JWT Bearer authentication
services.AddAuthentication( sharedOptions => {
      sharedOptions.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
      sharedOptions.DefaultChallengeScheme    = JwtBearerDefaults.AuthenticationScheme;
   } )
   .AddJwtBearer( options => {
      options.Authority = this._configuration["Auth:Authority"];
      options.Audience  = this._configuration["Auth:Audience"];
   } );

Using the same authority as getting the token, I got an error message that told me I needed to set up an authorization server. I set one up with the default settings. When I tried to authenticate my token against this server, I got the error “Signature validation failed. Unable to match ‘kid’”.

The solution turned out to be pretty simple: the client has to be changed to use the authorization server’s URL as its authority when getting the token, instead of just the base Okta org URL.

Signing MSI Installers with a Code Signing Certificate


Signing installer MSIs with a code signing certificate prevents Windows from showing a big red “This application is untrusted!” warning when an installer is launched.

I recently had to set up code signing with a certificate we got from GoDaddy and it’s a little convoluted so I’ll document it here.

Creating and using a code signing certificate involves three passwords, which we’ll call REQUEST_PASSWORD, EXPORT_PASSWORD, and SIGNING_PASSWORD.

Getting a Code Signing Certificate

We get our code signing certificates from GoDaddy.

Generating a Certificate Request

For this we’ll need our REQUEST_PASSWORD.
Following the instructions here we’ll end up with the files
* request.csr
* request.pfx

The pfx file has our private key embedded in it. These files need to be submitted to GoDaddy.

When the request is processed, GoDaddy will send us certificate files. These are randomly named, something like SOMERANDOM-SHA2.spc.

Extract the Private Key from the Certificate Request

We need the private key in the certificate request as a .key file. To do this we need to install OpenSSL. It can be installed as part of Cygwin.

Generate the key via (where $ is the cygwin bash prompt):

$ openssl pkcs12 -in request.pfx -nocerts -out request.key.pem -nodes
Enter Import Password: REQUEST_PASSWORD 

The key will be in request.key.pem

Create a PVK File

Next we need to create a PVK file. For this we need pvk.exe.


PS C:\tmp\ssl> .\pvk.exe -in .\request.key.pem -topvk -strong -out cert.key.pvk
Enter Password: EXPORT_PASSWORD 
Verifying - Enter Password: EXPORT_PASSWORD 

This generates cert.key.pvk

Combine the PVK and SPC into a PFX

Installers are signed with a PFX file which is a combination of the key and certificate. For this we need pvk2pfx.exe.


pvk2pfx.exe -pvk cert.key.pvk -pi EXPORT_PASSWORD -spc SOMERANDOM-SHA2.spc -pfx codesign.pfx -po SIGNING_PASSWORD -f

This generates codesign.pfx. This, along with SIGNING_PASSWORD is what we need to sign the MSI. When the code signing certificate expires we’ll need to repeat the steps above.

Signing the Installer

Once we have the PFX and the signing password, we can sign the installer. For this we need signtool.exe.

The command to sign the installer is:

.\signtool.exe sign /f .\codesign.pfx /p SIGNING_PASSWORD /d "(some description)" /tr TIMESTAMP_SERVER_URL /v "PATH_TO_MSI"

There are various timestamp servers you can use for TIMESTAMP_SERVER_URL.