Refactoring Using Cognitive Complexity

Andrew Moscardino | December 14th, 2020

Refactoring code is an essential part of the software development process. Sometimes though, it's difficult to determine what should be refactored and why. One measure for determining what is in need of some refactoring is Cognitive Complexity.

Cognitive Complexity is a metric developed by SonarSource, the makers of a code quality tool we use at AWH called SonarQube. SonarQube hooks into a CI pipeline for a project and performs code quality analysis on it. The types of issues that SonarQube can find range from simple things (this variable is never used, remove it) to more complex refactoring (this method is too complex, refactor it). To determine if a method is too complex and should be refactored, a Cognitive Complexity score is calculated.

Defining Complexity

Complexity in programming and computer science has a few meanings. Most programmers are probably familiar with Cyclomatic Complexity, which attempts to tell us how difficult a method will be to test or maintain. It does this by incrementing a score for every branch in a method, like so:

public static string HowMany(int n) // +1 (at least one path through each method)
{
    switch (n)
    {
        case 1: // +1
            return "One";

        case 2: // +1
            return "A couple";

        case 3: // +1
            return "A few";

        case 4: // +1
            return "Several";

        default:
            return "Many";
    }

    // Cyclomatic Complexity = 5
}

While Cyclomatic Complexity does a good job of telling us how complex a method is when it comes to testing (the score is generally the minimum number of tests you will need to cover a method), it doesn't give us a good indication of how maintainable or understandable the code is. Cognitive Complexity builds on and adapts Cyclomatic Complexity to give us a better score that relates more to understanding and maintainability rather than testing.

Note that Cognitive Complexity is not specific to any programming language or style. All the examples in this post use C#.

Calculating Complexity

The first thing you should know about calculating Cognitive Complexity is that you don't need to do it yourself. SonarQube will analyze your code as part of a CI/CD pipeline and is a great tool for more than just Cognitive Complexity analysis. But learning how the score is calculated can be helpful in structuring your code in more maintainable ways.

This post is only going to give the basics of calculating Cognitive Complexity. For the full details, check out SonarSource's white paper.

Here are the basic rules for calculating a Cognitive Complexity score:

  1. Increment the score for every break in the control structure of the code.
  2. Increment the score again for control structure breaks with each level of nesting.
  3. Ignore shorthand constructs that aid readability.

Let's break those down some more.

Breaks in Control Structure

This is the most basic rule and is carried over from Cyclomatic Complexity. Loops and conditionals increment the score for each break in the flow. There are some caveats, though.

  • try and finally blocks are ignored, but catch will increment the score, though only once no matter how many types of exceptions are caught.
  • switch will only increment the score by 1, regardless of how many cases are handled. This is different from Cyclomatic Complexity and is done because a switch is generally easier to scan and read than a group of equivalent if/else if/else statements.
  • Sequences of logical operators are grouped together. Consider the difference between a && b && c && d and a && b || c && d. The first would increment by 1, whereas the second would increment by 3 since it is much more difficult to read (see the sketch after this list).
  • Recursion is considered to be a kind of loop and increments the score.
  • Jumps (break, continue, etc) will increment the score, but an early return will not.
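
To make the logical-operator rule concrete, here is a quick sketch. These methods are placeholders I made up for illustration, scored the same way as the other examples in this post:

public static bool AllSet(bool a, bool b, bool c, bool d)
{
    return a && b && c && d; // +1 (one unbroken run of the same operator)
}

public static bool MixedCheck(bool a, bool b, bool c, bool d)
{
    return a && b || c && d; // +3 (the operator changes twice: &&, ||, &&)
}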

Because of that rule about switch statements, the HowMany method above only has a Cognitive Complexity score of 1.
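
For comparison, here is roughly what an equivalent if/else if chain would look like. This isn't from the original example; it's just a sketch to show why the switch is rewarded:

public static string HowManyIfElse(int n)
{
    if (n == 1) // +1
        return "One";
    else if (n == 2) // +1
        return "A couple";
    else if (n == 3) // +1
        return "A few";
    else if (n == 4) // +1
        return "Several";
    else
        return "Many";

    // Cognitive Complexity = 4 (counting as above), versus 1 for the switch
}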

Nesting Control Structures

Excessive nesting is a very nasty code smell. Cognitive Complexity calls this out by incrementing each control structure again for each level it is nested. Jumps are not subject to additional increments for nesting, though.

public string DoSomething(string str)
{
    var newStr = string.Empty;

    if (str != null) // +1
    {
        for (var i = 0; i < str.Length; i++) // +1 (+1 for nesting)
        {
            if (str[i] == 'a') // +1 (+2 for nesting)
                newStr += 'A';
            else if (str[i] == 'x') // +1 (+2 for nesting)
                continue; // +1
            else
                newStr += str[i];
        }
    }

    return newStr;
}

// Cognitive Complexity = 10
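
Just as a sketch, a guard clause and a flattened loop bring that same method's score down considerably. Counting the same way as above, this version scores roughly 6 instead of 10:

public string DoSomething(string str)
{
    if (str == null) // +1
        return string.Empty;

    var newStr = string.Empty;

    for (var i = 0; i < str.Length; i++) // +1
    {
        if (str[i] == 'a') // +1 (+1 for nesting)
            newStr += 'A';
        else if (str[i] != 'x') // +1 (+1 for nesting)
            newStr += str[i];
    }

    return newStr;
}

// Cognitive Complexity = 6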

Ignore Shorthand Constructs

Cognitive Complexity scores are meant to reward good coding practices, and thus shorthand constructs and statements are not counted against the score. Most notably, the null-conditional and null-coalescing operators are great ways to use the language to reduce complexity, as in the following example:

// Not good, increments the score
var myString = string.Empty;
if (someObj != null)
    myString = someObj.StringProperty;

// Good, does not increment the score
var myString = someObj?.StringProperty ?? string.Empty;

Examples

Let’s use these rules and refactor something. Using what we know, we can identify some easy ways to reduce complexity and make a method easier to read and maintain. Let’s start with a big, nasty method and identify the areas with the highest complexity:

public async Task ParseAsync(string startingUrl, bool recurse = false, bool verbose = false)
{
    startingUrl = CleanUrl(startingUrl);
    _rootUrl = new Uri(startingUrl).GetLeftPart(UriPartial.Authority);

    var urls = new Dictionary<string, int>();
    urls.Add(startingUrl, 0);

    while (urls.Any(x => x.Value == 0))
    {
        // Grab any URLs from the dictionary that we have not checked. If we are recursing,
        // new URLs will be added with a status code of 0 and we will pick them up on the
        // next pass.
        var urlsToProcess = urls.Where(x => x.Value == 0).Select(x => x.Key).ToList();

        foreach (var url in urlsToProcess)
        {
            var displayUrl = url.Length > (Console.BufferWidth - 10)
                ? $"{url.Substring(0, Console.BufferWidth - 10)}..."
                : url;
            Console.Write(displayUrl);

            HttpResponseMessage response;

            try
            {
                response = await _httpClient.GetAsync(url);
            }
            catch (HttpRequestException ex)
            {
                urls[url] = -1;

                Console.ForegroundColor = ConsoleColor.Red;
                Console.WriteLine($" [{ex.Message}]");
                Console.ResetColor();
                continue;
            }

            urls[url] = (int)response.StatusCode;

            if (response.IsSuccessStatusCode)
            {
                if (verbose)
                {
                    Console.ForegroundColor = ConsoleColor.Green;
                    Console.WriteLine($" [{(int)response.StatusCode}]");
                    Console.ResetColor();
                }
                else
                {
                    // Clear the current line
                    Console.SetCursorPosition(0, Console.CursorTop);
                    Console.Write(new string(' ', Console.BufferWidth - 1));
                    Console.SetCursorPosition(0, Console.CursorTop);
                }
            }
            else
            {
                // Write the error code and exit the loop early
                Console.ForegroundColor = ConsoleColor.Red;
                Console.WriteLine($" [{(int)response.StatusCode}]");
                Console.ResetColor();
                continue;
            }

            // Exit early if we are not recursing unless we are checking the starting URL
            if (!recurse && url != startingUrl)
                continue;

            // Exit early if the URL is external or if the content is not HTML
            if (!IsInternalUrl(url) || !response.Content.Headers.ContentType.MediaType.StartsWith("text/html"))
                continue;

            try
            {
                // If we made it this far, we will parse the HTML for links
                var html = await response.Content.ReadAsStringAsync();

                // We add each link to the dictionary with a status code of 0. Since
                // we are using a dictionary, URLs that are already checked or slated
                // to be checked will be ignored.
                GetUrlsFromHtml(html).ForEach(x => urls.TryAdd(x, 0));
            }
            catch
            {
                Console.WriteLine(url);
                Console.WriteLine($"\tUnable to parse HTML.");
            }
        }
    }
}

Some context on what is happening would be nice, right? This is the main method from a small side project I made called Linky. It’s a command line app to scan websites and attempt to identify broken links. I’ve left out some helper methods here so we can focus on the big one that could use some refactoring.

Let’s try to identify some issues before we start. First, there are up to four levels of nesting. That’s pretty bad and should probably be our main target. There’s also some boilerplate code around outputting to the console that could be abstracted away. This won’t help our complexity score all that much, but it would make the method a lot easier to read.

I calculate a Cognitive Complexity score of 31. SonarQube recommends that methods not exceed 15.

What changes can be made to reduce that score? I would first extract the contents of the big foreach loop into a new method. That will reduce the nesting of everything inside that loop by two levels, which is a huge win. Then I would extract the various console outputs into new methods. In some cases, this also pulls control-flow breaks out of the main method, further reducing its complexity.

Let’s see how it all shapes up:

public async Task ParseAsync(string startingUrl, bool recurse = false, bool verbose = false)
{
    _startingUrl = CleanUrl(startingUrl);
    _rootUrl = new Uri(_startingUrl).GetLeftPart(UriPartial.Authority);
    _recurse = recurse;
    _verbose = verbose;

    var urls = new Dictionary<string, int>();
    urls.Add(_startingUrl, 0);

    while (urls.Any(x => x.Value == 0))
    {
        // Grab any URLs from the dictionary that we have not checked. If we are recursing,
        // new URLs will be added with a status code of 0 and we will pick them up on the
        // next pass.
        var urlsToProcess = urls.Where(x => x.Value == 0).Select(x => x.Key).ToList();

        foreach (var url in urlsToProcess)
            await ProcessUrlAsync(url, urls);
    }
}

private async Task ProcessUrlAsync(string url, Dictionary<string, int> urls)
{
    DisplayUrl(url);

    HttpResponseMessage response;

    try
    {
        response = await _httpClient.GetAsync(url);
    }
    catch (HttpRequestException ex)
    {
        urls[url] = -1;

        DisplayErrorCode(ex.Message);
        return;
    }

    urls[url] = (int)response.StatusCode;

    if (response.IsSuccessStatusCode)
        DisplaySuccessCode(urls[url]);
    else
    {
        // Write the error code and exit the loop early
        DisplayErrorCode(urls[url].ToString());
        return;
    }

    // Exit early if we are not recursing unless we are checking the starting URL
    if (!_recurse && url != _startingUrl)
        return;

    // Exit early if the URL is external or if the content is not HTML
    if (!IsInternalUrl(url) || !response.Content.Headers.ContentType.MediaType.StartsWith("text/html"))
        return;

    try
    {
        // If we made it this far, we will parse the HTML for links
        var html = await response.Content.ReadAsStringAsync();

        // We add each link to the dictionary with a status code of 0. Since
        // we are using a dictionary, URLs that are already checked or slated
        // to be checked will be ignored.
        GetUrlsFromHtml(html).ForEach(x => urls.TryAdd(x, 0));
    }
    catch
    {
        Console.WriteLine(url);
        Console.WriteLine($"\tUnable to parse HTML.");
    }
}

private void DisplayUrl(string url)
{
    var displayUrl = url.Length > (Console.BufferWidth - 10)
        ? $"{url.Substring(0, Console.BufferWidth - 10)}..."
        : url;

    Console.Write(displayUrl);
}

private void DisplaySuccessCode(int statusCode)
{
    if (_verbose)
    {
        Console.ForegroundColor = ConsoleColor.Green;
        Console.WriteLine($" [{statusCode}]");
        Console.ResetColor();
    }
    else
    {
        // Clear the current line
        Console.SetCursorPosition(0, Console.CursorTop);
        Console.Write(new string(' ', Console.BufferWidth - 1));
        Console.SetCursorPosition(0, Console.CursorTop);
    }
}

private void DisplayErrorCode(string errorCode)
{
    Console.ForegroundColor = ConsoleColor.Red;
    Console.WriteLine($" [{errorCode}]");
    Console.ResetColor();
}

How did we do?

  • The main method, ParseAsync, now has a score of 3. 🎉
  • ProcessUrlAsync is now the biggest method, but since it’s not nested in two loops, the score is only 7. 🎉🎉
  • DisplayUrl and DisplaySuccessCode have a score of 1 each. DisplayErrorCode has a score of 0.
  • Some parameters are now assigned to private fields. This has no effect on the score, but it keeps the code more readable since we don’t need to pass them around as much.

Functionally, the code is the same: it executes and outputs exactly as it did before. The only difference is how we structured what we are doing. We created smaller, more focused methods that can be more easily understood and thus more easily maintained. Cognitive Complexity helped us identify the most complex parts of our method so we could refactor where it was needed most.

For more technical blogs, check out our posts on The IoC Kata and The Very Scary Story of How CMS Lost Its Head, or browse our insights page to find what you're looking for. If you're looking for a technology partner to guide your project, let's have a conversation. We have an experienced team of developers with skills ranging from firmware engineering and embedded programming to ML and AI development.