Clash of Styles, Part #2 – Operations Matrix via FP

Intro

In the first part of this series, I introduced the Operations Matrix, and we saw how Object-Oriented Programming leads us to implement our program “by rows.” In other words, every object implements every operation on itself. It’s time to contrast that with the Functional Programming approach.

If you haven’t read the preceding post, you will be missing some context. Reading it is not strictly mandatory, but I believe you’ll gain a lot more insight coming from the previous article, as it sets the stage for the whole discussion.

I hope that by the end of this article, you will appreciate how diametrically opposed OOP and FP are and how they encourage us to decompose our programs in precisely opposite ways. It’s very much a matter of different perspectives, different ways to look at the same matrix. The final goal stays the same, though – implement the requirements by filling out all the cells.

Note: This series of articles is very much influenced by some of the core ideas presented in the excellent three-part course “Programming Languages” on Coursera. I encourage you to read my review and pick up the course yourself.

Additionally, you can read through Chapter 6 – “Objects and Data Structures” of the bestseller Clean Code by Uncle Bob. Some of the concepts presented here are also discussed in this great book. By the way, you should read the book anyway if you haven’t already.

The Functional Perspective of the Operations Matrix

Let’s recall the Operations Matrix for our simple interpreter:
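
                     Eval        Stringify
  Integer Constant    ?              ?
  Addition            ?              ?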

We agreed that, no matter which programming style we choose, we need to fill out all the cells in the grid, all those question marks. In Part 1, we implemented the matrix “by rows,” following the OOP conventions. In case you’ve forgotten, from the OOP perspective the matrix is filled out row by row: each expression type implements all of the operations on itself.

Not surprisingly, the functional view is precisely the opposite: the matrix is filled out column by column, one operation at a time, with each operation covering all the expression types.

This comparison is crucial for our discussion, so let’s express it in words:

With OOP, every expression type has an implementation for every operation.

With FP, every operation has an implementation for every expression type.

After all of these conceptual parallels, it’s time to start moving towards the actual functional implementation.

Function per Operation

We are saying that, by adhering to the functional style, every operation will contain an implementation for every expression type. What does that mean exactly? Well, in FP, every operation is represented by, you guessed it, a function. This means that we’ll have one function for every operation, each taking an expression as input. And depending on the exact type of that expression, Integer Constant or Addition in our case, the function will execute differently.

The last sentence makes many people wrinkle their noses. In OOP, we avoid checking for types; it is considered a code smell. Instead, we find a way to express the operation in terms of the object itself. There are even well-known refactorings and patterns that address this, for example, Replace Conditional with Polymorphism from the canonical Refactoring book by Martin Fowler.

However, that’s not the only valid way to decompose your program into more manageable pieces. Let’s start exploring this idea by implementing the Operations Matrix from a functional perspective in C#.

FP Implementation of the Operations Matrix via C#

In the functional world, data is just grouped together in simple containers. Robert C. Martin calls them “Data Structures” in Clean Code. Think of those as POCOs/POJOs or whatever the right abbreviation for your language is. This is precisely the opposite of “real” objects, where the data is encapsulated behind a well-defined public interface.

In the context of the FP implementation of our interpreter, this means we’ll have simple POCOs for our MyInt and Addition expressions. I would also prefer to introduce a marker IExpression interface without any methods for them to “implement.”

public interface IExpression
{
}

This interface is not strictly mandatory, but I find it better from a domain modeling perspective.

Here is the Integer Constant definition:

public class MyInt : IExpression
{
    public int Val { get; }
    
    public MyInt(int val)
    {
        Val = val;
    }
}

And the Addition:

public class Addition : IExpression
{
    public IExpression Operand1 { get; }
    public IExpression Operand2 { get; }

    public Addition(IExpression operand1, IExpression operand2)
    {
        Operand1 = operand1;
        Operand2 = operand2;
    }
}

Having our POCOs in place, we can get to the more exciting part: implementing the Eval() and Stringify() functions. They will receive an IExpression as input and act accordingly based on its exact type.

Let’s see Eval() first:

public static int Eval(IExpression expression)
{
    return expression switch
    {
        Addition addition => Eval(addition.Operand1) + Eval(addition.Operand2),
        MyInt myInt => myInt.Val
    };
}

I guess the implementation is quite self-explanatory. We pattern-match on the underlying expression, which, in our example, can be an Addition or a MyInt, and based on that, we proceed differently with the evaluation.

From a high-level perspective, the implementation is identical to the OOP one. For Addition, we evaluate the subexpressions and add them together. The Integer Constant just returns its value. The main difference is structural: here, the first-class construct is the function, while in OOP, it was the object.

After the discussion above, the Stringify() implementation is exactly what you’d expect:

public static string Stringify(IExpression expression)
{
    return expression switch
    {
        Addition addition => $"{Stringify(addition.Operand1)} + {Stringify(addition.Operand2)}",
        MyInt myInt => myInt.Val.ToString()
    };
}

Here is some sample driver code:

var expression = new Addition(
    new MyInt(3),
    new Addition(new MyInt(4), new MyInt(5)));

var asString = Stringify(expression);
var result = Eval(expression);

Console.WriteLine($"{asString} = {result}"); // 3 + 4 + 5 = 12

With that, our functional implementation in C# is complete.

Although I think this solution is entirely legal, it does have some drawbacks. Let’s see what I mean in the next section.

The C# Inconveniences and Alternatives

In the OOP implementation, the IExpression interface declared the Eval() and Stringify() operations. This “forces” the MyInt and Addition classes to define them. We can’t possibly forget to implement any of the operations because the program would simply not compile.
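
As a quick reminder, the interface from Part 1 declared the operations roughly like this (a sketch from memory; the exact code is in the previous post):

public interface IExpression
{
    int Eval();
    string Stringify();
}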

Unfortunately, we won’t get that kind of help from the compiler with our functional approach. If someone adds a new class that implements our marker IExpression interface, nothing will point us to the missing case in our pattern match (switch) expression; at best, the compiler warns that the switch is not exhaustive in general, because it cannot know the full set of IExpression implementers. This is not surprising. Interfaces and inheritance, in general, are not primarily meant for such scenarios. They shine when we use them for natural OOP idioms.
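
To make the problem concrete, imagine that someone adds a hypothetical Negation expression later on (the class below is purely illustrative; it’s not part of our interpreter):

public class Negation : IExpression
{
    public IExpression Operand { get; }

    public Negation(IExpression operand)
    {
        Operand = operand;
    }
}

// Eval(new Negation(new MyInt(3))) still compiles, but the switch has no matching arm,
// so the call only fails at runtime (with a SwitchExpressionException on modern .NET).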

We need something else – a data type with “one of” semantics. This will allow us to declare our expression type as “one of” two things, an Integer Constant or an Addition Expression. Such a data type is called a Discriminated Union, or just a Union. Unfortunately, we don’t have this in C#, although there are some quite smart attempts to simulate it (see the resources at the end).
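
Just to give a flavor of those simulations, here is a rough sketch of one common technique (my own simplified take, not code from the linked articles): a closed class hierarchy with a Match method that forces every caller to handle both cases.

public abstract class Expression
{
    private Expression() { } // only the nested types below can inherit

    // Every consumer must supply a handler for each case.
    public abstract T Match<T>(Func<int, T> myInt, Func<Expression, Expression, T> addition);

    public sealed class MyInt : Expression
    {
        public int Val { get; }
        public MyInt(int val) => Val = val;

        public override T Match<T>(Func<int, T> myInt, Func<Expression, Expression, T> addition) =>
            myInt(Val);
    }

    public sealed class Addition : Expression
    {
        public Expression Operand1 { get; }
        public Expression Operand2 { get; }

        public Addition(Expression operand1, Expression operand2)
        {
            Operand1 = operand1;
            Operand2 = operand2;
        }

        public override T Match<T>(Func<int, T> myInt, Func<Expression, Expression, T> addition) =>
            addition(Operand1, Operand2);
    }
}

// Eval then becomes:
// public static int Eval(Expression e) => e.Match(i => i, (a, b) => Eval(a) + Eval(b));

It works, but it is a lot of ceremony compared to a language-level feature.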

That’s why I’ll switch to F#, where Discriminated Unions are first-class citizens and, combined with pattern matching, give us the elegant, functional solution we are chasing.

F# Implementation of the Operations Matrix

If this is the first time you’re seeing F#, the syntax and code structure can look a little weird, especially if you haven’t dealt with any other functional language. Even so, the code is very minimalistic, and I’m quite sure you’ll be able to comprehend its semantics.

Conceptually, what we’re about to see is very similar to the pattern-matching expressions we used in our previous C# implementation. The main difference will be that our input expression will be a Union instead of an object implementing the empty IExpression interface.

Concretely, here is our Expression type defined as a Discriminated Union:

type Expression =
    | MyInt of int
    | Addition of Expression * Expression

Here we say that an expression can be one of two things:

  • MyInt – representing a simple integer constant
  • Addition – representing the sum of two subexpressions (each of which is itself an Expression)

With the Expression type defined, we can continue with implementing the operations.

Let’s start with eval:

let rec eval expression =
    match expression with
    | MyInt i -> i
    | Addition (op1, op2) -> eval op1 + eval op2

I’m sure you find this pattern matching very similar to what we did in C#. However, modeling the expression as a union makes the semantics a lot cleaner. We also get the advantage of the compiler giving us hints when we forget a case.

For example, if we skip the Addition case like so:
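
let rec eval expression =
    match expression with
    | MyInt i -> i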

We receive the following warning when building our program:

Program.fs(8, 11): [FS0025] Incomplete pattern matches on this expression. For example, the value 'Addition (_, _)' may indicate a case not covered by the pattern(s).

Let’s see the stringify implementation for completeness:

let rec stringify expression =
    match expression with
    | MyInt i -> i.ToString()
    | Addition (op1, op2) -> stringify(op1) + " + " + stringify(op2)

And here is some sample driver code:

let expression = Addition(MyInt 3,
                          Addition(MyInt 4, MyInt 5))

let asString = stringify expression
let result = eval expression

Console.WriteLine(asString + " = " + result.ToString()) // 3 + 4 + 5 = 12

Summary

That was the functional implementation of the Operations Matrix. I hope that by now, you’ve developed some very practical sense of how OOP and FP compare to each other fundamentally.

In the next part of this series, we’ll start reviewing extensibility. The material so far is not just an “OOP vs. FP” contrast per se. It lays the groundwork for a more in-depth conversation on how those two approaches make specific changes in our programs easy or hard.

In the next post, we’ll start extending our Operations Matrix “horizontally” and “vertically.” By doing that, we’ll see under which conditions one programming style fits better than the other. To me, this is quite important. It’s not about personal preferences. Depending on your choice, your program can be more (or less) maintainable, which is what every software development practitioner cares about, right?

Upcoming in the series are more advanced topics like Visitors, Double Dispatch, Extensibility, and many more, so stay tuned!

Thanks for reading.

Resources

  1. Programming Languages, Part A
  2. Programming Languages, Part B
  3. Programming Languages, Part C
  4. Clean Code
  5. Refactoring
  6. Discriminated Union in C# (Stack Overflow) – https://stackoverflow.com/questions/3151702/discriminated-union-in-c-sharp
  7. Discriminated Unions in C#/.NET – http://astreltsov.com/software-development/discriminated-unions-in-c-sharp-dot-net.html
  8. Union Types in C# – http://boustrophedonic.com/blog/2012/10/21/union-types-in-csharp/
  9. Discriminated Unions (F# language reference) – https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/discriminated-unions