Dogs Chasing Squirrels

A software development blog

React and .NET Core WebAPI with F# Part 1: React


I’m going to go through a step-by-step guide to getting React and .NET Core WebAPI working together. In this guide I’m going to try to document everything so there are no hidden steps and very little assumed knowledge.

In this first part, I’m just going to get a vanilla solution working with TypeScript and React.

Create the solution and project directories

We’re going to have the layout of a standard Visual Studio solution here, so create a folder for the solution, e.g. ReactWebApiDemo and then under that a folder for our web project, e.g. ReactWebApiDemo again.

Install npm

Most modern web projects use the Node.js package manager, npm, so the first step is to install Node.js, which includes npm. I’ll note that yarn is a possible alternative to npm and you’re welcome to try it instead, though the usage will be slightly different. At the time of writing, the npm version was 5.6.0.

First we need to initialize our project with

npm init

This gives the following:

PS C:\Projects\ReactWebApiDemo\ReactWebApiDemo> npm init
This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.

See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg>` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
package name: (reactwebapidemo)
version: (1.0.0)
description: React WebAPI Demo
entry point: (index.js)
test command:
git repository:
keywords:
author:
license: (ISC)
About to write to C:\Projects\ReactWebApiDemo\ReactWebApiDemo\package.json:

{
  "name": "reactwebapidemo",
  "version": "1.0.0",
  "description": "React WebAPI Demo",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}


Is this ok? (yes) yes

After that, the syntax for installing packages that will end up in production code (like React) is:

npm install {module name}

or, for libraries that are used in development but will not end up in production:

npm install --save-dev {module name}

"–save-dev" can be replaced by "-D", e.g.

npm install -D {module name}

Install and Configure Webpack

The first thing we're going to set up is webpack. Webpack is a module bundler: it transforms our source files and packages them together for production. We'll use it to:
* Convert TypeScript (.ts) files to JavaScript (.js) files
* Convert React TypeScript (.tsx) files to JavaScript (.js) files
* Convert Less (.less) files to CSS (.css) files.
* Pack our JavaScript and CSS files together with those of the third-party libraries we're using.
* Minify and compress our JavaScript and CSS files.

The first step is to install webpack and its command-line interface, webpack-cli, which we can do with npm:

PS C:\Projects\ReactWebApiDemo\ReactWebApiDemo> npm install --save-dev webpack webpack-cli
[..................] - fetchMetadata: sill resolveWithNewModule webpack@4.10.2 checking installable status
  • webpack – The bundling and minification utility.
  • webpack-cli – The webpack command line interface.

Webpack runs off a configuration file, webpack.config.js. This is the barest of configuration files to start with:

// Let us use the core webpack module as a library
const webpack = require( 'webpack' );
// Let us use the built-in webpack path module as a library
const path = require( 'path' );


module.exports = {
    // Entry: a.k.a. "Entry Point", the JS file that will be used to build the JavaScript dependency graph
    // If not specified, src/index.js is the default.
    entry: "./src/index.js",
    // Output: Where we'll output the build files.
    output: {
        // Path: The directory to which we'll write transformed files
        path: path.resolve( __dirname, 'dist' ),
        // Filename: The name to which we'll write our bundled JavaScript.
        // If not specified, dist/main.js is the default.
        filename: 'bundle.js'
    },
    // The processing mode.  Accepted values are "development", "production", or "none".
    mode: 'production'
}

You'll note that it needs an entry file at "src/index.js" and an output directory at "dist", so create those in the web project.
After that we can run it with:

node .\node_modules\webpack\bin\webpack.js

e.g.

PS C:\Projects\ReactWebApiDemo\ReactWebApiDemo> node .\node_modules\webpack\bin\webpack.js
Hash: c4097b5edb272ec4b73c
Version: webpack 4.10.2
Time: 125ms
Built at: 2018-06-03 20:48:45
    Asset       Size  Chunks             Chunk Names
bundle.js  930 bytes       0  [emitted]  main
[0] ./src/index.js 0 bytes {0} [built]

We can make this easier on ourselves by setting this command up in the "scripts" section of package.json. E.g.

  "scripts": {
    "debug": "node ./node_modules/webpack/bin/webpack.js"
  },

Then:

npm run-script debug

or

npm run debug

We can make this even easier by putting the mode in each script and calling "webpack" directly (npm run resolves it from node_modules/.bin):

  "scripts": {
    "debug": "webpack --mode none",
    "dev": "webpack --mode development",
    "release": "webpack --mode production"
  },

We're going to make sure we have this working, so for now, change index.js to:

function test() {
}

Run npm run-script debug. It should create “dist\bundle.js”. Open it up and you should see some webpack bootstrapping code with our test() method at the bottom.

One last change we can make is to let webpack know about common extensions so we can import files as just “import ‘./blah'” and not “import ‘./blah.js'”.
Add the following after the “module” section:

    resolve: {
        extensions: ['.js', '.ts', '.jsx', '.tsx', '.json']
    }

Setting up Less CSS

Run

npm install --save-dev less less-loader css-loader style-loader
  • less – The Less CSS library.
  • less-loader – The webpack module for Less-to-CSS conversion.
  • css-loader – The webpack module that allows us to import CSS into JavaScript.
  • style-loader – The webpack module that, with css-loader, lets us import styles into JavaScript.

Following the instructions on the less-loader site, we add this to webpack.config.js:

    // Define our modules here
    module : {
        rules: [
            { 
                test: /\.less$/, // Match all *.less files
                use: [{
                    loader: 'style-loader' // creates style nodes from JS strings
                  }, {
                    loader: 'css-loader' // translates CSS into CommonJS
                  }, {
                    loader: 'less-loader' // compiles Less to CSS
                  }]
            }
        ]
    }

Add a less file to src, e.g. “site.less”.

body {
    font-family: 'Times New Roman', Times, serif;
}

Have index.js import the style from the file, e.g.

import style from './site.less'

If you run webpack again, you’ll see bundle.js get updated.
You can test that the style is applied by creating a small HTML file, e.g. index.html, that loads the bundle:

<html>
    <head>
        <meta charset="utf-8">
        <title>Test</title>
        <script src="../dist/bundle.js"></script>
    </head>
    <body>
        <h1>Test</h1>
    </body>
</html>

If you load the file in a browser, you’ll see the CSS is used.

Setting up TypeScript

I’m basically following the standard webpack TypeScript setup, except we’re going to use awesome-typescript-loader instead of ts-loader. Once again, we start by installing the prerequisites. Run

npm install --save-dev typescript awesome-typescript-loader

TypeScript needs its own config file, tsconfig.json:

{
    "compilerOptions": {
        "outDir": "./dist/",
        "noImplicitAny": true,
        "module": "es6",
        "moduleResolution": "node",
        "target": "es5",
        "jsx": "react",
        "allowJs": true
    }
}

Note “moduleResolution”: “node”. This will allow us to import node modules by name, e.g. import * as React from "react", rather than by relative path.

In webpack.config.js we need to add a rule for TypeScript to the module section:

            // Typescript
            {
                test: /\.tsx?$/, // Match *.ts and *.tsx
                use: 'awesome-typescript-loader', // Converts TypeScript to JavaScript
                exclude: /node_modules/ // Don't look in NPM's node_modules
            }

We can test it by putting a TypeScript file, e.g. test.ts, in the src folder. This is the TypeScript straight from the “TypeScript in 5 minutes” tutorial:

interface Person {
    firstName: string;
    lastName: string;
}

export function greeter(person: Person) {
    return "Hello, " + person.firstName + " " + person.lastName;
}

Then reference it in index.js:

import { greeter } from './test';

If you run webpack again, you’ll see the greeter code added to bundle.js.

Setting up React

Again, install react and react-dom with npm. No “--save-dev” this time; these are going in production!

npm install react react-dom

We’re also going to want to use Babel to let us use ES6+ features in ES5 browsers. Start by installing Babel (dev only):

npm install --save-dev babel-core babel-loader 

We then install the Babel presets that tell it which plugins to set up.

npm install --save-dev babel-preset-env babel-preset-react

We need to set the babel loader up in our webpack.config.js:

            // Babel
            {
                test: /\.jsx?$/, // Match *.js and *.jsx
                use: 'babel-loader', // Converts ES2015+ JavaScript to browser-compatible JS
                exclude: /node_modules/ // Don't look in NPM's node_modules
            }

And Babel needs its own configuration file, .babelrc, to tell it which presets to use:

{
    "presets": ["env", "react"]
}

Let’s test this out. We’ll change our index.js to include the React code given in the React tutorial:

import style from './site.less';
import React from "react";
import ReactDOM from "react-dom";

class ShoppingList extends React.Component {
    render() {
        return (
            <div>
                <h1>Shopping List for {this.props.name}</h1>
                <ul>
                    <li>Instagram</li>
                    <li>WhatsApp</li>
                    <li>Oculus</li>
                </ul>
            </div>
        );
    }
}

function renderShoppingList() {
    ReactDOM.render(
        <ShoppingList />,
        document.getElementById('shopping-list')
    );
}

window.onload = renderShoppingList;

If we rebuild and load index.html (with a <div id="shopping-list"></div> element in the body), we’ll see the React component render.

Let’s try loading the component from a separate JSX file. We’ll save this as ShoppingList.jsx:

import React from "react";
import ReactDOM from "react-dom";

class ShoppingList extends React.Component {
    render() {
      return (
        <div>
          <h1>Shopping List for {this.props.name}</h1>
          <ul>
            <li>A</li>
            <li>B</li>
            <li>C</li>
          </ul>
        </div>
      );
    }
  }

  module.exports = {
    renderShoppingList : function() {
      console.log( "renderShoppingList" );
      ReactDOM.render(
          <ShoppingList />,
          document.getElementById('shopping-list')
        );
    }
  }

Then modify our index.js like so:

import style from './site.less';
import React from "react";
import ReactDOM from "react-dom";

var shoppingList = require( "./ShoppingList" );

function test() {
    console.log( "test" );
    shoppingList.renderShoppingList();
}
window.onload = test;

And put the ID it requires in index.html:

    <body>
        <h1>Test</h1>
        <div id="shopping-list"></div>
    </body>

Now if we load our index.html in the browser we’ll see our React component.

Finally, let’s see if we can get this working with a TSX file.

For TypeScript to be able to import node modules properly we need to import the TypeScript type packages.

npm install --save-dev @types/react @types/react-dom

If we fail to do this we’ll get errors like “TS7016: Could not find a declaration file for module ‘react-dom'”.

Let’s make another component called AnotherComponent.tsx with the following code:

import * as React from 'react';
import * as ReactDOM from 'react-dom';

class AnotherComponent extends React.Component {
    public render() {
        return (
            <div>
                <h1>TSX</h1>
            </div>
        );
    }
}


export function renderAnotherComponent() {
    console.log("renderAnotherComponent");
    ReactDOM.render(
        <AnotherComponent />,
        document.getElementById('another-component')
    );
}

Now change index.js to call it:

var shoppingList = require( "./ShoppingList" );
var anotherComponent = require( "./AnotherComponent" );

function test() {
    console.log( "test" );
    shoppingList.renderShoppingList();
    anotherComponent.renderAnotherComponent();
}
window.onload = test;

And put the ID it requires in index.html:

    <body>
        <h1>Test</h1>
        <div id="shopping-list"></div>
        <div id="another-component"></div>
    </body>

Now if we load index.html in a browser we’ll see our JSX component and our TSX component.

VBA for Engineers


I make oil and gas software and work with engineers a lot. I am one myself, by education. Engineers like to use Excel and if they remember the little programming they took in university, they like to extend what Excel can do by using Visual Basic for Applications, or VBA. VBA is terrible. It’s based on VB6 which is also terrible. Together, VB6, VBA, and VB.Net make up three of the top five most dreaded languages in the most recent Stack Overflow Developer Survey.

The saying goes that when the only tool you have is a hammer, every problem looks like a nail. Engineers’ tool is VBA so you find them using it where it’s not appropriate or useful. Moreover, they’re amateur programmers so to a professional software developer like me, opening up a macro-enabled Excel file is like a carpenter walking up to a woodworking project and seeing a screw that’s been pounded double surrounded by a bunch of circular indentations.

They still ask me for help with their Visual Basic problems and rather than turn them away I’ve finally said that I’m glad to help. After all, the first step in getting help with a VBA problem is admitting you have a VBA problem. To that end, I’ve tried to establish a 12-step program based on the traditional 12-step programs like AA. VBAA, if you will.

1. AA: We admitted we were powerless over alcohol—that our lives had become unmanageable.
   VBAA: We admitted we were powerless over VBA—that our Excel-based workflow had become unmanageable.
2. AA: Came to believe that a Power greater than ourselves could restore us to sanity.
   VBAA: Came to believe that a department greater than ourselves could restore us to sanity.
3. AA: Made a decision to turn our will and our lives over to the care of God as we understood Him.
   VBAA: Made a decision to turn our will and our lives over to the care of IT as we understood them.
4. AA: Made a searching and fearless moral inventory of ourselves.
   VBAA: Made a searching and fearless moral inventory of our Excel-based workflow.
5. AA: Admitted to God, to ourselves, and to another human being the exact nature of our wrongs.
   VBAA: Admitted to IT, to ourselves, and to another human being the exact nature of our wrongs.
6. AA: Were entirely ready to have God remove all these defects of character.
   VBAA: Were entirely ready to have IT remove all these defects of character.
7. AA: Humbly asked Him to remove our shortcomings.
   VBAA: Humbly asked IT to remove our VBA macros and modules.
8. AA: Made a list of all persons we had harmed, and became willing to make amends to them all.
   VBAA: Made a list of all persons we had harmed, and became willing to make amends to them all.
9. AA: Made direct amends to such people wherever possible, except when to do so would injure them or others.
   VBAA: Made direct amends to such people wherever possible, except when to do so would injure them or others.
10. AA: Continued to take personal inventory, and when we were wrong, promptly admitted it.
    VBAA: Continued to take personal inventory, and when we were wrong, promptly admitted it.
11. AA: Sought through prayer and meditation to improve our conscious contact with God as we understood Him, praying only for knowledge of His will for us and the power to carry that out.
    VBAA: Sought through prayer and meditation to improve our conscious contact with IT, praying only for knowledge of better programming languages for us and the power to carry that out.
12. AA: Having had a spiritual awakening as the result of these steps, we tried to carry this message to alcoholics, and to practice these principles in all our affairs.
    VBAA: Having had a spiritual awakening as the result of these steps, we tried to carry this message to other Engineers, and to practice these principles in all our affairs.

I encourage all engineers suffering from a VBA problem to seek a sponsor in their IT’s software development department.

Mocking delegates with Moq


Using Delegates

In C#, a delegate is a type that defines a method signature, so a method can be passed around as a parameter. This is a delegate that takes a couple of parameters and returns a value:

public delegate int DoSomething( double x, string y );

This is a method that puts it to work with Invoke:

    public int CallDelegate( DoSomething doSomething, double x, string y ) {
        return doSomething?.Invoke( x, y ) ?? 0;
    }

You don’t need to use Invoke; you can call the delegate directly via:

        return doSomething( x, y );

but the Invoke form is nice because the null-conditional operator (?.) guards against a null delegate.
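
For clarity, here’s a rough sketch of what that null-conditional call expands to, written as an explicit null check (CallDelegateExplicit is just an illustrative name; it uses the DoSomething delegate defined above):

    public int CallDelegateExplicit( DoSomething doSomething, double x, string y ) {
        // Equivalent to: return doSomething?.Invoke( x, y ) ?? 0;
        if ( doSomething == null ) {
            // No delegate was supplied, so fall back to a default value.
            return 0;
        }
        return doSomething( x, y );
    }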

Mocking Delegates

When unit testing with Moq, you may find yourself wanting to mock a delegate or to verify that it was called. It’s straightforward; just make sure you set up the delegate call itself and not Invoke:

        [TestMethod]
        public void TestMethod() {

            var mockDoSomething = new Mock<MyClass.DoSomething>();
            mockDoSomething.Setup( _ => _( It.IsAny<double>(), It.IsAny<string>() ) ).Returns( 5 );
            // NOT
            // mockDoSomething.Setup( _ => _.Invoke( It.IsAny<double>(), It.IsAny<string>() ) ).Returns( 5 );
            var subject = new MyClass();
            var result = subject.CallDelegate( mockDoSomething.Object, 1.1, "x" );
            Assert.AreEqual( 5, result );
            mockDoSomething.Verify( _ => _( 1.1, "x" ), Times.Once );
            // NOT
            // mockDoSomething.Verify( _ => _.Invoke( 1.1, "x" ), Times.Once );
        }

If you try to mock Invoke itself, you’ll get an error like:

System.InvalidCastException: Unable to cast object of type ‘System.Linq.Expressions.InstanceMethodCallExpressionN’ to type ‘System.Linq.Expressions.InvocationExpression’.

Wrapping the Legend in SciChart


I’ve been using SciChart as a real-time graphing control. There’s documentation on the SciChart website about how to make the legend wrap when it’s too long, but it’s not clear. It seems you can’t do it with LegendModifier alone, and while you can do it with a SciChartLegend, it’s not clear where that control is supposed to go.

Anyway, this is what works:

<s:SciChartSurface.ChartModifier>
  <s:ModifierGroup>
    <s:LegendModifier                                     
      x:Name="LegendModifier"
      ShowLegend="True"
      ShowVisibilityCheckboxes="False"
      Orientation="Horizontal"
      Margin="0,10,0,10"
      LegendPlacement="Top" 
      >
      <s:LegendModifier.LegendTemplate>
        <ControlTemplate TargetType="s:LegendPlaceholder">
          <s:SciChartLegend
            Orientation="Horizontal"
            LegendData="{Binding LegendData, ElementName=LegendModifier}" 
            >
            <s:SciChartLegend.ItemsPanel>
              <ItemsPanelTemplate>
                <WrapPanel Orientation="{Binding Orientation, RelativeSource={RelativeSource AncestorType=s:SciChartLegend}}" />
              </ItemsPanelTemplate>
            </s:SciChartLegend.ItemsPanel>
          </s:SciChartLegend>
        </ControlTemplate>
      </s:LegendModifier.LegendTemplate>
    </s:LegendModifier>
  </s:ModifierGroup>
</s:SciChartSurface.ChartModifier>

Setting up a TFS 2017 Build Server’s Account


We have an on-premises TFS 2017 server with the package management plugin installed to host custom NuGet packages.

I happily set up TFS builds of my solution. I happily set up custom NuGet packages. Then I ran a build of a solution that made use of my custom packages. Imagine my surprise when the TFS build server was unable to download packages from its own TFS server! Furthermore, the error code was the rarely-seen “402 Payment Required”.

Now, it turns out that Package Management generally requires licenses. This hadn’t mattered because it’s free for Visual Studio Enterprise subscribers, which we all have through MSDN. The build server, however, runs under its own service account, which naturally doesn’t have an MSDN subscription.

So how do you have the build server run as a user with an MSDN subscription? After a support call, a guy from Microsoft helped me figure it out. Here it is for everybody with the same problem (and for future me, when the solution below expires in a year).

The solution

Basically, the solution is to have the build agent authenticate using the Personal Access Token (PAT) of a user with an MSDN license.

Step 1. Generate a Personal Access Token

If you log into TFS 2017, in the corner under the settings there’s an “Access Tokens” setting.

Click “Add”.

Create a token. I made mine for a year, ensuring I would forget all this by the time it expires and that a year from now I’ll be confounded when all my builds break.

The personal access token will be a long string. Save it somewhere temporarily.

Step 2. Configure the build agent to use the PAT

If you already have the build agent set up, you’re going to have to remove it. Go to the folder and run:

.\config.cmd remove

It’s safest to delete the whole folder and recreate it from the agent package you downloaded from TFS. Until I removed the folder entirely, this fix failed.

Once you’ve recreated the agent folder, run

.\config.cmd

from PowerShell to start the process again.

This time, when it gets to “Enter authentication type (press enter for Integrated)” enter “PAT”.
It will ask you for the token. Enter the long string you got above.
Continue as normal.

Other Gotchas

The build agent will download NuGet packages using the license you’ve set up above, but with the authorization of the account the agent runs as. If you find you’ve traded “Payment Required” for “Unauthorized”, make sure the build agent user has access as a package reader in Package Management’s Security settings.

Akka.NET and Unity IoC


Even though Akka.NET supposedly supports dependency injection through its DI libraries (e.g. Akka.DI.Unity), it’s not very good. In the documentation you’ll see examples like:

Context.DI().Props<MyActor>()

That Props call doesn’t take any arguments so there’s no way to actually inject parameters into the call. Let’s say we have this actor:

public class MyActor : ReceiveActor {
  public MyActor( string a, IFoo b, IBar c ) {
    // ...
  }
}

Further, say we want to specify a ourselves but let Unity resolve IFoo and IBar from its configuration. If we were just creating the object normally we would write:

var myActor = UnityContainer.Resolve<MyActor>( 
  new ParameterOverride( "a", "ABC" ) 
  );

But that’s not how we create actors. We create actors using ActorOf, specifying Props, so that Akka.NET can recreate the actors if they fail.

If we were to create the actor normally without dependency injection we would write something like:

var myActorRef = actorRefFactory.ActorOf( 
  Props.Create( () => new MyActor( "ABC", ??, ?? ) ) 
  );

If we do that, we aren’t getting our interfaces from Unity.

So let’s try dependency injection. If we were to use Akka.DI.Unity we would have:

var myActorRef = actorRefFactory.ActorOf(
  Context.DI().Props<MyActor>() 
  );

But then there’s no way to specify our parameter.
We’re stuck: we can specify all parameters or none, but we can’t specify some and let Unity take care of the others like we usually want.

Here’s a way you can do it that doesn’t involve the Akka.DI libraries:

IIndirectActorProducer

It turns out that Props.Create is not the only way to create an actor. If you dive into the source code, you can see there’s an IIndirectActorProducer interface that lets you control how the actor instance is created by implementing a Produce method.
Here’s an implementation that takes an IUnityContainer and the ResolverOverrides in its constructor and uses them to create the actor when called:

    /// <summary>
    /// A <see cref="IIndirectActorProducer" /> that uses a <see cref="IUnityContainer" /> to resolve instances.
    /// </summary>
    /// <remarks>
    /// This is only used directly by the <see cref="UnityActorRefFactory"/>
    /// </remarks>
    /// <typeparam name="TActor"></typeparam>
    internal sealed class UnityActorProducer<TActor> : IIndirectActorProducer where TActor : ActorBase {

        /// <summary>
        /// The resolver overrides.
        /// </summary>
        private readonly ResolverOverride[] _resolverOverrides;
        /// <summary>
        /// The unity container.
        /// </summary>
        private readonly IUnityContainer _unityContainer;

        /// <summary>
        /// The constructor.
        /// </summary>
        /// <param name="unityContainer">The unity container.</param>
        /// <param name="resolverOverrides">The resolver overrides.</param>
        public UnityActorProducer( IUnityContainer unityContainer, params ResolverOverride[] resolverOverrides ) {
            this._unityContainer = unityContainer.CreateChildContainer();
            this._resolverOverrides = resolverOverrides;
        }

        /// <summary>
        /// See <see cref="IIndirectActorProducer.ActorType"/>
        /// </summary>
        public Type ActorType => typeof( TActor );

        /// <summary>
        /// See <see cref="IIndirectActorProducer.Produce" />
        /// </summary>
        /// <returns></returns>
        public ActorBase Produce() {
            // Create the actor using our overrides
            return this._unityContainer.Resolve<TActor>( this._resolverOverrides );
        }

        /// <summary>
        /// See <see cref="IIndirectActorProducer.Release" />
        /// </summary>
        /// <param name="actor"></param>
        public void Release( ActorBase actor ) {
            // Do nothing
        }

    }

We can create Props using the producer like this:

        /// <summary>
        /// Creates <see cref="Akka.Actor.Props" /> using the <see cref="IUnityContainer" /> and any <see cref="ResolverOverride" />s provided.
        /// </summary>
        /// <typeparam name="TActor"></typeparam>
        /// <param name="unityContainer"></param>
        /// <param name="resolverOverrides"></param>
        /// <returns></returns>
        public Props Props<TActor>( IUnityContainer unityContainer, params ResolverOverride[] resolverOverrides ) where TActor : ActorBase {
            // Use a UnityActorProducer to create the object using the container and resolver overrides.
            return Akka.Actor.Props.CreateBy<UnityActorProducer<TActor>>( 
                unityContainer, 
                resolverOverrides 
                );
        }

So creating our actor would look like this:

var myActorRef = Context.ActorOf(
  Props<MyActor>( 
    unityContainer, 
    new ParameterOverride( "a", "ABC" ) 
  )
  );

It uses our resolver override for the first parameter and the Unity configuration for the remaining parameters, just as we want.
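
For completeness, this assumes IFoo and IBar have already been registered with the container so that Unity can resolve them on its own. A minimal sketch of that registration might look like the following, where Foo and Bar are hypothetical implementations:

var unityContainer = new UnityContainer();
// Register the interfaces the actor needs so Unity can resolve them without overrides.
unityContainer.RegisterType<IFoo, Foo>();
unityContainer.RegisterType<IBar, Bar>();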

Microsoft and Xamarin, Google and Swift


Xamarin

A few months ago I was looking at mobile app development again and so had a look at Xamarin, a cross-platform app toolset built around .NET. It seemed interesting, but it was hugely expensive: something like $600 per year per license. With Android Studio free and Xcode either free or $99, it was cheaper to go native.

Microsoft Buys Xamarin

Now Microsoft has acquired Xamarin and it will be free to use. So now you can theoretically write C# code and have it build (with some additional platform-specific effort) Android and iOS (and Windows Phone, if anyone cares) apps cheaply.

Google needs a new first-class language… Swift?

With Oracle constantly trying to squeeze money out of Google for their use of Java on Android, Google very obviously needs a new first-class language for Android – one that can eventually replace Java completely. Reportedly they’re considering using Apple’s Swift. Although I’ve spent only a little time with Swift, it seems like an excellent language. Like F#, it’s a hybrid functional language with objects.

Write once… in C# or Swift?

Xamarin’s supposed advantage is that you can write in one language, C#, and compile to iOS or Android. What if Swift becomes the native language for both platforms? Someone will create libraries to cross-compile the UI elements. What need is there then for Xamarin? Swift is a more modern language. Though Xamarin theoretically supports all .NET languages and thus would support F#, we know F# is a second-class citizen. Xamarin would be solely for those .NET developers who are unable to move on to new languages. If Google does adopt Swift for Android, Xamarin will become the mobile equivalent of WebForms.

F# and Swift


I’m looking at Apple’s Swift language and it’s interesting to see how similar it is to F# (which itself is similar to Scala and Haskell). I think this is all in reaction to the trouble we’re having in general with pure object-oriented programming and the advantages of a functional (or at least hybrid) approach in the age of distributed systems.

Value declaration

This is some F# for assigning variables:

let someConstant = 10
let mutable someVariable = 1.23
let specifyTypeExplicitly : String = "blah"

and in Swift:

let someConstant = 10 
var someVariable = 1.23
var specifyTypeExplicitly : String = "blah"

F# uses let mutable where Swift uses var; otherwise they’re the same. Types are generally inferred but can be set explicitly.

Both languages have tuples and dictionaries assigned in basically the same way.

Pattern matching

Instead of F#’s match, Swift puts its matching inside a traditional switch statement.

//let rgba = ( 1.0, 1.0, 1.0, 1.0 ) "white"
//let rgba = ( 0.4, 0.4, 0.4, 1.0 ) "gray"
//let rgba = ( 0.0, 0.6, 0.8, 1.0 ) "blue is 0.8"
let rgba = ( 0.0, 0.6, 0.8, 1.0 )
switch rgba {
case ( 1.0, 1.0, 1.0, 1.0 ):
    print( "white" )
case let ( r, g, b, 1.0) where r==g && g==b:
    print("gray")
case (0.0, 0.5...1.0, let b, _):
    print("blue is \(b)")
default:
    break
}

In F#, this would be something like (this is from memory):

//let rgba = ( 1.0, 1.0, 1.0, 1.0 ) "white"
//let rgba = ( 0.4, 0.4, 0.4, 1.0 ) "gray"
//let rgba = ( 0.0, 0.6, 0.8, 1.0 ) "blue is 0.800000"
let rgba = ( 0.0, 0.6, 0.8, 1.0 )
match rgba with
| ( 1.0, 1.0, 1.0, 1.0 ) ->
    printfn "white"
| ( r, g, b, 1.0 ) when ( r = g ) && ( g = b ) ->
    printfn "gray"
| ( 0.0, g, b, _ ) when ( g >= 0.5 ) && ( g <= 1.0 ) ->
    printfn "blue is %f" b 
| _ -> ()

Some and None

Swift and F# both support the Some and None keywords for optional values. e.g.

var x : Int? = nil
switch x {
case .Some( let value ):
    print("x has a value")
case .None:
    print("x is nil")
}

In F#:

let x : int option = None
match x with
| Some(value) ->
    printfn "x has a value"
| None ->
    printfn "x is nil"

Currying

In my first look at the language, currying doesn’t seem quite as nice.
Here’s some F#:

let add a b = a + b
let add2 = add 2
let result = add2 3 // 5

In Swift (in the tersest way I know how at present):

let add : (Int,Int) -> Int = { $0 + $1 }
let add2 : (Int) -> Int = { add( 2, $0 ) }
let result = add2( 3 ) // 5

Another way is to use Swift’s curried function syntax to have add return a function, as in the example below, but then calling the initial function does not look like a standard function call:

func add (a:Int)(_ b:Int)-> Int {
    return a + b
}
add(2)(3) // not add(2,3) // 5
let add2 = add(2)
add2(3) // 5

Conclusion

This is a very shallow comparison. I just thought it was interesting.

Visual Studio Code Coverage: Value does not fall within the expected range


While attempting to improve my code coverage in Visual Studio 2014, I suddenly found myself getting the error “Value does not fall within the expected range” when running unit tests with code coverage enabled. Without code coverage, the tests ran fine. With code coverage, that error would appear twice in the test output.

I googled the error for a while and mostly came up empty. I eventually tracked down a forum post where a user was getting the same message, but related to website profiling. It did help me track down the issue, though.

The solution

I looked in the configuration manager. It turns out that while most of the projects in my solution were set to the “Any CPU” platform, I had one WiX installer project set to “x86”, presumably because it builds an x86-compatible installer. If that project is disabled, code coverage works. If it’s re-enabled, I get the error. For now, I’ve just been disabling that project when I want to do code coverage.

Akka.NET, Prism, Unity, and WPF


Since I’ve been reading up on Reactive Extensions, I listened with interest to the recent .NET Rocks! podcast on the subject. During the conversation, Akka.NET was mentioned.

Akka.NET is another reactive technology. It’s an implementation of the Actor model: you break down an application’s functionality into small computational units called actors, which hold limited state and communicate via message passing. There are parallels to Rx’s concept of observables and observers. Akka.NET also provides the kind of supervision hierarchy you find in Erlang, in which an actor has child actors and, when those children fail, the parent can resume or restart them. This makes the application “self-healing”.
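
To make the pattern concrete, here’s a minimal sketch of a message and an actor that handles it, assuming only the basic Akka.NET APIs (the Greet/GreetingActor names are just for illustration):

public class Greet {
    public Greet( string name ) { this.Name = name; }
    public string Name { get; private set; }
}

public class GreetingActor : ReceiveActor {
    public GreetingActor() {
        // State stays inside the actor; all communication happens via messages.
        this.Receive<Greet>( message => Console.WriteLine( "Hello, " + message.Name ) );
    }
}

// Usage:
// var system = ActorSystem.Create( "demo" );
// var greeter = system.ActorOf( Props.Create( () => new GreetingActor() ), "greeter" );
// greeter.Tell( new Greet( "world" ) );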

I watched two Pluralsight courses on Akka.NET by Jason Roberts, Building Concurrent Applications with the Actor Model in Akka.NET and Building Reactive Concurrent WPF Applications with Akka.NET (subscription required). The former gives an overview of Akka.NET and the latter applies it to an MVVM WPF application.

In the demo MVVM application, the author uses Akka.NET with Ninject dependency injection and the MVVM Light toolkit. Personally, I prefer Unity and Prism, so as an exercise for myself I re-created the demo using those two libraries. The result is on GitHub at https://github.com/mkb137/AkkaPrismUnityDemo.

The project consists of
* Akka.DI.Unity – A copy of Akka.NET’s Akka.DI.Unity upgraded from Unity 3.5 to Unity 4.0. No code changes were required otherwise.
* AkkaPrismUnityDemo – The Prism shell project
* AkkaPrismUnityDemo.Infrastructure – The common infrastructure project (which in this simple demo only contains the region names)
* AkkaPrismUnityDemo.Modules.Stocks – The main module containing the demo code.

A screenshot of the result is shown at the top of this post. In it, the view model creates actors which then have knowledge of the view model that created them and will call methods on the view model when messages are received and processed.

Here a view model creates an actor and passes it a reference to itself:

this.StockToggleButtonActorRef = 
    this.UnityContainer.Resolve<ActorSystem>()
    .ActorOf( 
        Props.Create( () => new StockToggleButtonActor( 
            this.StocksCoordinatorActorRef, 
            this, 
            this.StockSymbol
        ) 
    ) 
);

Here the actor makes a callback to the view model:

this.Receive<ToggleStockMessage>( message => {
    this._stocksCoordinatorActorRef.Tell( new WatchStockMessage( this._stockSymbol ) );
    this._viewModel.UpdateButtonTextToOn();
    this.Become( this.ToggledOn );
} );

As with Rx’s observables and observers, messages are handled on the thread pool, so actors make full use of the processors available on the machine with no extra effort from the developer.

It’s an interesting technology. Compared to Rx, Akka.NET is missing some of Rx’s advanced publish/subscribe operators, like the ability to buffer or throttle messages, though it does have a Sample equivalent via the scheduler’s ScheduleTellRepeatedly method (sketched below). On the plus side, it provides the supervision hierarchy and a more natural message pipeline. With Rx, it seems like having an object that is both observer and observable (i.e. part of a pipeline), while supported, is somehow discouraged.
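
As a rough sketch of that Sample-like behaviour, from inside an actor you can ask the scheduler to send a recurring message (RefreshMessage here is a hypothetical message type):

// Have the scheduler Tell us a RefreshMessage every second, starting after one second.
Context.System.Scheduler.ScheduleTellRepeatedly(
    TimeSpan.FromSeconds( 1 ), // initial delay
    TimeSpan.FromSeconds( 1 ), // interval
    this.Self,                 // receiver
    new RefreshMessage(),      // message
    this.Self                  // sender
);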