JS Tip: Simple Array Of Unique Items

A co-worker showed me a simple trick for getting an array of unique, simple values. Let's say we were combining two arrays of message IDs:

let arr = [19,22,7,12,6,85];
let arr2 = [22,8,3,19,45];
let newArr = [...arr, ...arr2];
// newArr equals [19, 22, 7, 12, 6, 85, 22, 8, 3, 19, 45]

This gives us a new array, combining the values of the first two. But, we often only want the unique values. Rather than looping over every item, checking for dupes, etc., we can take advantage of the new Set object. A `Set` lets you store unique values of any type, and automatically tosses duplicates. And, since a Set is iterable, it's easy to convert it back into a true array.

newArr = Array.from(new Set(newArr));
// newArr now equals [19, 22, 7, 12, 6, 85, 8, 3, 45]
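
As an aside, since a Set is iterable you can also use the spread operator in place of Array.from. A minimal one-liner (the unique variable name is mine) that combines and dedupes in a single step:

let unique = [...new Set([...arr, ...arr2])];
// unique equals [19, 22, 7, 12, 6, 85, 8, 3, 45]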

And, being an array of numbers, we'd likely want it sorted in numeric order. We can do this with Array.sort().

newArr = Array.from(new Set(newArr)).sort();
// Not exactly, now it reads [12, 19, 22, 3, 45, 6, 7, 8, 85]

OK, so that seems a little weird, until you read the documentation for `sort()` that I linked to above:

The default sort order is built upon converting the elements into strings, then comparing their sequences of UTF-16 code units values.

Well, that seems a bit of a bummer. But, you can get around this by using the optional `compareFunction` argument of the `sort()` method.

newArr = Array.from(new Set(newArr)).sort((a,b) => a - b);
// That's better! Now it reads [3, 6, 7, 8, 12, 19, 22, 45, 85]

And there you have it. Simple, unique value array. The `Set` object accepts values of any type, so you could store complex objects as well (though note that objects are compared by reference, so two distinct objects with identical contents are not considered duplicates), and again you would have to provide a custom `compareFunction` for handling the `sort()`.
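
For instance, here's a quick sketch (the message objects and comparator are my own illustration, not from the original tip) of sorting objects by a numeric id:

let messages = [{id: 22, body: 'second'}, {id: 3, body: 'first'}, {id: 85, body: 'third'}];
messages.sort((a, b) => a.id - b.id);
// messages now reads [{id: 3, ...}, {id: 22, ...}, {id: 85, ...}]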

Fun With Destructuring

If you aren't compiling your JavaScript (well, ECMAScript...) code with Babel, you're probably missing out on some of the best features of the evolving language. On the other hand, if you're working in React or Angular every day, you've probably come across some of its most dynamic features. Destructuring is a prime example, but it's also important to understand the "gotchas".

Let's take a simple example to show you some of the power of destructuring.

const nodes = [{
    id: '0-1',
    label: 'Led Zeppelin',
    members: [{
        id: '0-1-1',
        label: 'Jimmy Page'
    }, {
        id: '0-1-2',
        label: 'Robert Plant'
    }, {
        id: '0-1-3',
        label: 'John Paul Jones'
    }, {
        id: '0-1-4',
        label: 'John Bonham'
    }]
}];

const band = nodes[0];
const { label: bandName, members } = band;
const [leadGuitar, leadSinger, bassPlayer, drummer] = members;

So, what's it do? Let's break it down. I took the first node and assigned it to band. I then assigned the bandName and members variables from the band's label and members values, respectively. Then, I took the first four items from my members array, and assigned each of them to a variable as well. This offers you a lot of power, simplifies your code, and can save some CPU cycles as well.

But, what happens if something doesn't exist? Say you had a band with no members (that would be a trick), or members but no drummer? In those cases the members or drummer variables would be undefined.
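
You can guard against that by supplying defaults right in the destructuring pattern. A quick sketch (the fallback values here are my own, hypothetical ones):

const { label: bandName = 'Unknown Band', members = [] } = band;
const [leadGuitar, leadSinger, bassPlayer, drummer = { label: 'TBD' }] = members;

Keep in mind these defaults only kick in when the destructured value is undefined.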

Now, let's talk about "gotchas". Here's a neat bit of syntactic sugar for you.

drummer = {...drummer, deceased: true};

Using the spread operator, we add a new key to the drummer object. But, wait...

We also replaced the drummer object. This is important. While using destructuring like this can be easy, and very effective, it can have consequences. If you needed to update drummer by reference, you just killed the reference assignment.

And, the above statement would error (as will the array example below). This is because we declared drummer (and members) using const. While we can adjust, add, or remove keys and values, we can't reassign the variable. We would have to declare it using let instead of const.

The same holds true when using a spread operator and destructuring when attempting to update an array.

members = [...members, { id: '0-1-5', label: 'Jason Bonham' }];

While the members array now has a fifth item, it's no longer the same array that band.members references; you replaced the variable, so the original band object never sees the change.

But, this is no big deal, unless you needed to update the reference to the original variable. As long as you're aware of this limitation, it's easy to fallback on other methods to update those references. Let's change our variable declarations a little bit, and retool this code to work for us.

const band = nodes[0];
const { label: bandName, members } = band;
let [leadGuitar, leadSinger, bassPlayer, drummer] = members;

Object.assign(drummer, {deceased: true});
members.splice(3, 0, {id: '0-1-5', label: 'Jason Bonham'}); // insert Jason in the drummer array position
[,,,drummer] = members; // and update the declaration

We switched our member variable declarations to let, so they can be reassigned, updated the drummer in place, inserted a new member in the correct position, and updated the drummer reference to point at the new member.
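
One more trick worth knowing (my own addendum, not part of the original example): destructuring also supports rest patterns for grabbing "everything else":

const [guitarist, ...restOfBand] = members;
// guitarist is the first member; restOfBand is a new array of the remaining members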

This post only briefly touches on the power of destructuring in modern ECMAScript. For a fantastic overview, check out the MDN documentation.

A Note On Environment Variables With Docker

I mentioned in a previous post the three different methods for defining Environment Variables for our Docker environment, but I hit a small snag I didn't immediately realize.

You cannot reference those variables directly in your Dockerfile during setup. You can create new Environment Variables in your Dockerfile (hey, method 4), but you can't access those externally defined variables in your Dockerfile process.

Here's the deal. When you run `docker-compose build`, Docker creates the layers of your stack, but doesn't fire off your entrypoints, which is where the meat of your processes is, and where the Environment Variables actually get read. So, what if, in your Dockerfile, you wanted to define your server timezone? We set a timezone environment variable in a previous post. How can we then pass that to the Dockerfile for the `build`?

Arguments. I can define a build argument in my Docker Compose file, and then reference that from the Dockerfile during `build`. Improving on that further, I can dynamically set that Argument, in the Docker Compose file, using the Environment Variable I already set. Let's look at a small section of the Docker Compose file, where I define my Argument.

docker-compose.yml

version: "3.3"
services:
    lucee:
        build:
            context: ./lucee
            args:
                - TZ=${TZ}
...

I won't talk about the other bits, but you can see the args section under build where I've defined TZ, and tied it to the Environment Variable we had previously set up with the same name.

Now let's look at how you use the Argument in your Dockerfile.

Dockerfile

...
ARG TZ

RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone ...

Now that last line might be different (for setting system timezone) depending on your environment, but this shows you how to properly access the variable in your `build`.
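
As a side note (an assumption on my part; verify your Compose version supports it), you can also override a build argument straight from the command line, without touching either file:

> docker-compose build --build-arg TZ=America/Chicago lucee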

Analyzing Our Docker Compose Infrastructure Requirements

This multi-part series goes in depth in converting this site infrastructure to a containerized setup with Docker Compose. See the bottom of this post for other posts in the series.

So, before we continue I think it's important to lay out some of the next steps in what it is I wanted/needed to accomplish. I'm using Docker Compose to define my infrastructure. I started with the database, as it will be used for multiple sites, so that was a no-brainer.

But what's next? Well, first let me look at some of my requirements.

  • Database (check)
  • ColdFusion Rendering Engine (for this blog) [Lucee]
  • Multi Context CF Setup (blog and os project sites/pages)
  • Web Server
  • New Photography Site (?)
  • Secure Sites with SSL for Google
  • Auto Backup to S3 (?)

Yeah, I set some stretch goals in there too. But, it's what I wanted, so I got to work.

In my initial implementation on Digital Ocean I used the default lucee4-nginx container. Nginx is a nice, easily configurable web server. And, it worked great for a year, up until Digital Ocean restarted my Droplet while running some necessary security maintenance on their infrastructure. Suddenly, nothing worked.

Whoops.

OK, so this was the first thing I had to figure out. It turned out to be a relatively easy thing to track down. I was using the "latest" container. Lucee updated the version of Tomcat in the lucee4-nginx container. There were changes to the container's internal pathing that no longer lined up with the various settings files I had, so I just had to resolve the pathing issues to get it all straight. I also took the opportunity to go ahead and switch to Lucee 5.2.

Now I was back up and running on my initial configuration, but (as you can see in the list above) I had some new goals I wanted to accomplish. So I sat down and started looking over my other requirements to figure out exactly what I needed. One of the first things I looked into was the SSL certs. I could buy expensive wildcard domain certs, but this is a blog. It creates no direct income. Luckily there's LetsEncrypt. LetsEncrypt is a great little project working to secure the internet, creating a free, automated and open Certificate Authority to distribute, configure and manage SSL certs.

Long story short, my investigation of all of my requirements made me realize that I needed to decouple Lucee from Nginx, putting each in its own separate container. I'm going to use Nginx as a reverse proxy to multiple containers/services, so decoupling makes the most sense. I'm still keeping things small, because this is all out of pocket, but one of the advantages of Docker Compose is I can define multiple small containers, each handling its own defined responsibility.

In the end our containers will look something like this:

  • MariaDb (check)
  • Lucee 5.2 (3 sites)
  • Other (photo site, possibly Ghost)
  • Nginx
  • Docker-Gen (template generator, dependency for...)
  • LetsEncrypt
  • Backup (TBD)

Everyone's configuration changes over time, and this is what I came up with after my latest analysis of my requirements. I've already gone through multiple rounds of attacking each requirement, and probably haven't finalized things yet, but next post we'll step in again, set up our Nginx container, and start some configuration.

Adding a MariaDB Database Container

This multi-part series goes in depth in converting this site infrastructure to a containerized setup with Docker Compose. See the bottom of this post for other posts in the series.

Building on our last post, we're going to continue our step-by-step setup by talking more about the database. I had decided to use MariaDB for my database. For anyone unfamiliar, MariaDB is a fork of MySQL, created by many of MySQL's core development team when Oracle bought MySQL, to maintain an open source alternative. Since this blog was using a MySQL database on the shared hosting platform, I needed something I could now use in our DigitalOcean Droplet.

In that last post I showed you the beginnings of our Docker Compose configuration.

version: "3.3"
services:
  database:
    container_name: mydb
    image: mariadb:latest
    env_file:
      - mariadb.env
    volumes:
      - type: bind
        source: ./sqlscripts
        target: /docker-entrypoint-initdb.d
    networks:
      my-network:
        aliases:
          - mysql
          - mydb
    restart: always

networks:
  my-network:

I explained the basics of this in the last post, but now let me go into some more depth on the finer points of the MariaDB container itself. First, most of the magic comes from using Environment Variables. There are three different ways to set environment variables with Docker Compose. First, you can define environment variables in a .env file at the root of your directory, with variables that would apply to all of your containers. Second, you can create specific environment variable files (in this case the mariadb.env file) that you attach to containers using the env_file configuration attribute, like we did above. And third, you can add environment variables to a specific container using the environment configuration attribute on a service.

Why so many different ways to do the same thing? Use cases. The .env method is for variables shared across all of your containers. The env_file method can take multiple files, for when you need to define variables for more than one container and share them with another, but not all, and the environment method applies to just that one container. There may even be instances where you use all three methods.
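
For completeness, here's a minimal sketch of that third method (the inline value is just an example), attaching a variable directly to a single service:

version: "3.3"
services:
  database:
    image: mariadb:latest
    environment:
      - TZ=America/Chicago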

In that vein, let's look at a possible use case for a "global" environment variable. I want to use the same timezone in all of my containers. In my .env file I put the following:

TIMEZONE=America/Chicago
TZ=America/Chicago

I applied the same value to two separate keys, because some prebuilt containers look for it one way while others look for it another, but this is a perfect example of a "global" environment variable.

Now we can look at environment variables that are specific to our MariaDB container. Here's where things can get tricky. Some prebuilt containers are fairly well documented, some have no documentation at all, and most lie somewhere in between. The MariaDB container documentation is pretty good, but sometimes you have to dig in to get everything you need. Let's step in.

First, I needed MariaDB to set up the service. To do this right, you have to define the password for the root user. This is something that can go in your container-specific environment variables, or the container-specific environment variable file.

mariadb.env

MYSQL_ROOT_PASSWORD=mydbrootuserpw

While this will get the service up and running, it's not enough. I needed my blog database automatically set up by the build, as well as the user that my blog would use to access the database. Luckily, the prebuilt MariaDB container makes this pretty easy as well.

mariadb.env

MYSQL_DATABASE=databaseiwantmade
MYSQL_USER=userofthatdb
MYSQL_PASSWORD=passwordofthatuser

Boom! Without any extra code I created my database and the user I needed. But...

This was just the first step. I now have the service, the database, and the user, but no data. How would I preseed my blog data without manual intervention? Turns out that was fairly simple as well. Though it's only glossed over in the container documentation, you can provide scripts to fill your database, and more. Remember these lines from the Docker Compose service definition?

  ...
    volumes:
      - type: bind
        source: ./sqlscripts
        target: /docker-entrypoint-initdb.d
  ...

I was binding a local directory to a specific directory in the container. I can place any .sql or .sh file in that directory that I want, and the container will automatically run them in alphabetical order during container startup.
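
Since execution order is alphabetical, a numeric naming convention (these file names are my own, hypothetical ones) keeps dependencies straight:

sqlscripts/
  01-blogcfc.sql
  02-examples-setup.sh
  03-examples-data.sql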

OK. Back up. What? So, the container documentation says you can do this, but it doesn't really tell you how, or go into any kind of depth. So, I went and looked at that container's Dockerfile and found the following near the end:

ENTRYPOINT ["docker-entrypoint.sh"]

This is a Docker command that says "when you start up, and finish all the setup above me, go ahead and run this script." And, that script is in the GitHub repo for the MariaDB container as well. There are a lot of steps there as it sets up the service, and creates that base database and user for you, and then there's this bit of magic:

docker-entrypoint.sh

for f in /docker-entrypoint-initdb.d/*; do
    case "$f" in
        *.sh)     echo "$0: running $f"; . "$f" ;;
        *.sql)    echo "$0: running $f"; "${mysql[@]}" < "$f"; echo ;;
        *.sql.gz) echo "$0: running $f"; gunzip -c "$f" | "${mysql[@]}"; echo ;;
        *)        echo "$0: ignoring $f" ;;
    esac
    echo
done

The secret sauce. Now, I don't do a ton of shell scripting, but I am a linguist who's been programming a long time, so I know this is a loop that runs files. It runs shell files, it runs the sql scripts, it'll even run sql scripts that have been zipped up gzip style. Hot Dog!

So, what it tells me is that the files it will automatically process need to be located in the /docker-entrypoint-initdb.d directory, which you see I mapped to a local directory in my Docker Compose service configuration. To try this out, I took my blogcfc.sql file, dropped it into my local sqlscripts mapped directory, and started things up. I was then able to use the command line to log into my container and run mysqlshow to verify that not only was the database set up, but that it was loaded with data as well.
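
For reference, that check looked something like this (a sketch; the container name comes from the Compose file, and the credentials are the placeholders from above):

> docker exec -it mydb mysqlshow -uroot -pmydbrootuserpw
> docker exec -it mydb mysqlshow -uroot -pmydbrootuserpw databaseiwantmade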

But, it gets better. I needed a database for my Examples domain as well. This required another database, another user, and data. Now, I like to keep the .sql script for data, and use a .sh file for setting up the db, user, and permissions. I also wanted to put the needed details in my mariadb.env file, since I'll probably need them in another (Lucee) container later.

mariadb.env

...
EXAMPLES_DATABASE=dbname
EXAMPLES_USER=dbuser
EXAMPLES_PASSWORD=userpw
...

Then, I created my shell script for setting up the Examples database, and dropped it into that sqlscripts directory.

examples-setup.sh

#!/bin/bash

mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" <<MYSQL_SCRIPT
CREATE DATABASE IF NOT EXISTS $EXAMPLES_DATABASE;
CREATE USER '$EXAMPLES_USER'@'%' IDENTIFIED BY '$EXAMPLES_PASSWORD';
GRANT ALL PRIVILEGES ON $EXAMPLES_DATABASE.* TO '$EXAMPLES_USER'@'%';
FLUSH PRIVILEGES;
MYSQL_SCRIPT

echo "$EXAMPLES_DATABASE created"
echo "$EXAMPLES_USER given permissions"

Drop an accompanying .sql script into the same directory to populate the database (remember that all these scripts run in alphabetical order), and now I have a database service to fulfill my needs. Multiple databases, multiple users, pre-seeded data: we have the whole shebang.

By the way, remember this?

.env

TIMEZONE=America/Chicago
TZ=America/Chicago

The MariaDB container took that second variable (TZ) and automatically set the service's timezone for us as well. Snap!

This post covered our first container in our Docker Compose setup. Next post we'll continue our journey to set up a full environment.

Getting Started With Docker Compose

This multi-part series goes in depth in converting this site infrastructure to a containerized setup with Docker Compose. See the bottom of this post for other posts in the series.

As I mentioned in the last post, it was time to change hosting and I decided to go with DigitalOcean. But first, I had to figure out how to get all of my infrastructure deployed easily. DigitalOcean supports Docker, and I knew I could setup multiple containers easily using Docker Compose. I just had to decide on infrastructure.

Docker Compose allows one to script the setup of multiple containers, tying in all the necessary resources. There are thousands of prebuilt containers available on Docker Hub to choose from, or you can create your own. I knew I was going to have to customize most of my containers, so I chose to create my own, extending some existing containers. To begin with, I knew that I had three core requirements.

  • Lucee - Open Source CFML Engine
  • NGINX - Open Source Web Server/Application Platform
  • MariaDB - Open Source Database Server

Now, I could've used a combined Lucee/NGINX container (Lucee has one of those built already), but I knew that I would use NGINX for other things in the future as well, so I thought it best to separate the two.

When setting up my environment, I stepped in piece by piece. I'm going to lay out each container in separate posts (as each had its own hurdles), but here I'll give you some basics. You define your environment in a docker-compose.yml file. Spacing is extremely important in these files, so if you have an issue bringing up your environment, spacing will be one of the first things you want to check. Here I'll show a simple configuration for a database server.

version: "3.3"
services:
  database:
    container_name: mydb
    image: mariadb:latest
    env_file:
      - mariadb.env
    volumes:
      - type: bind
        source: ./sqlscripts
        target: /docker-entrypoint-initdb.d
    networks:
      my-network:
        aliases:
          - mysql
          - mydb
    restart: always

networks:
  my-network:

Here I've defined a network called my-network, and on that network I have a database service in a container called mydb. That container is aliased on the network as mydb and mysql. An alias is a name other containers can use when referencing this container. I bound a local folder (sqlscripts) to a folder in the container (docker-entrypoint-initdb.d). I also included a local file that contains the Environment Variables used by the container. This container used the actual mariadb image, but you could easily replace this line to point to a directory with its own Dockerfile defining your container (i.e. change 'image: mariadb:latest' to 'build: ./myimagefolder').

Bringing up your containers is simple. First you build your work, then you bring it up. From a terminal prompt:

> docker-compose build
> docker-compose up

You can add '-d' to that last command to skip all of the terminal output and drop you back at a prompt, but sometimes it's good to see what's happening. To stop it all (when not using '-d') just hit Ctrl-C; otherwise use 'docker-compose stop' or 'docker-compose down'. Going forward it will probably help to review the Docker Compose Command Line Reference.
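
In practice (these are standard Docker Compose commands, but verify the flags against your version's reference), that workflow looks like:

> docker-compose up -d
> docker-compose logs -f
> docker-compose stop
> docker-compose down

The 'up -d' detaches and runs everything in the background, and 'logs -f' tails the output when you do want to see what's happening.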

The Docker Compose File Reference is very extensive, providing a ton of options to work with. Here I'm using the 3.3 version of the file, and it's important to know which one you're using when you look at examples on the web, as options change or become deprecated from version to version.

That's a start to a basic Docker Compose setup. Continuing in the series we'll go over each container individually, and see how our Compose config ties it all together. Until next time...

Adventures in Docker Land

This multi-part series goes in depth in converting this site infrastructure to a containerized setup with Docker Compose.

For many years Full City Hosting hosted my blog for free. Emmet is a great guy, and they had shared CF servers, so it wasn't a big deal.

Fast forward a decade plus, two books, tons of traffic... Hey. And, FC phased out their shared CF servers, and moved to the cloud. Time to move. (For the record, Emmet is still a great guy.)

The first thing to decide was "Where do I host this?" There are a few moving parts here (CF, MySQL, DNS, multiple domains, etc). And there are costs to consider. And learning curve. Every enterprise app I'd supported had been on a Windows Server, and that wasn't going to happen with my blog and examples apps on a budget.

Emmet suggested DigitalOcean. I could host multiple small containers on a small Droplet for about $10 a month. This should be enough to give me exactly what I need to run my few, small domains.

Step 2: Figure out the required infrastructure. Deployment to DigitalOcean is simple with Docker. I could create containers for my web server, my app server, my database, etc. But Adobe ColdFusion costs money, and while I had a license for CF X, Adobe ColdFusion isn't really conducive to containerized deployment either.

Enter Lucee, an open source CFML app server. Not only is it free, they even have prebuilt Docker containers with documentation on how to configure them. Couple this with NGINX and MariaDB, and we're cookin' with Crisco.

So, I'm gonna cover how I did all of this, step by step. I found a lot of little traps along the way, but it's been a ride I'll share with you all here. Kick back, strap in, and let me know where I zigged when I should've zagged.

ES2015 and Fun With Parameters

If you've come to JavaScript after learning to program in other languages, one thing that's probably stuck in your craw over the years has been the lack of any way to define default parameters in functions. You've probably written something like this in the past:

var foo = function (bar) {
    bar = bar || 'ok';
    // ...
};

For the most part that sorta thing probably worked out, until your argument was a boolean, which then really complicated things.
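
To see why, here's a quick illustration of the pitfall (the function is my own, hypothetical example):

var setLights = function (on) {
    on = on || true; // a caller passing false is silently overridden
    console.log('lights on?', on);
};
setLights(false); // logs 'lights on? true', not what the caller asked for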

With ES2015, the heavens have opened and prayers have been answered, as we've finally been given the ability to define default parameters. Consider the following:

// This is in a class
foo (bar = true) {
    if (bar) {
        console.log('First round is on Simon!');
    } else {
        console.log('No drinks today!');
    }
}

Simple. We're defaulting our argument to true: if bar is true, then Simon is buying; otherwise we're out of luck. We can then call this method a few times to test it out:

constructor () {
    this.foo(); // 'First round is on Simon!'
    this.foo(false); // 'No drinks today!'
    let bar;
    this.foo(bar); // 'First round is on Simon!'
}

You can see from my comments what the output of those methods would be. It's important to note here that even an undefined value passed as an argument triggers the default value.
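
One caveat worth adding (my own note): the default only kicks in for undefined, not for other falsy values like null or 0:

this.foo(null); // 'No drinks today!', null is passed through as-is, and is falsy
this.foo(0); // 'No drinks today!', same story for any other falsy value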

Hopefully default arguments will help you to significantly simplify your code in the future. As usual, if you have any feedback/praise/complaints/war stories, feel free to comment below, or drop me a private message through the "contact" link on the page.

ES2015, Promises, and Fun With Scope

I've been using Promises for some time now. jQuery has acted as a shim for years, and several other libraries have been around as well. ES2015 includes Promises natively. If you're unfamiliar with Promises, I strongly suggest you read this great post by Dave Atchley.

Like I said though, I've been using Promises for a while now. So, when I started moving to ES2015 it was a bit of a kick in the pants to find issues with implementing my Promises. Let me give you an example of how something might've been written before:

'use strict';

module.exports = [
    '$scope', 'orders', 'dataService',
    function ($scope, orders, dataService) {
        var self = this;

        self.orders = orders;

        self.addOrder = function (order) {
            // ... do some stuff
            // get original
            dataService.get(order.id)
                .then(self._updateOrders)
                .catch(function (error) {
                    // do something with the error
                });
        };

        // faux private function, applied to 'this' for unit testing
        self._updateOrders = function (order) {
            // ... some process got our order index from orders, then...
            orders[index] = $.extend(true, orders[index], order);
        };
    }
];

Seems pretty straightforward, right? addOrder() gets called, which does some stuff and then retrieves an order from the service. When the service returns the order, that's passed to the _updateOrders() method, where it finds the correct item in the array and updates it (I know, it's flawed, but this is just an example to show the real problem).

So, what's the problem? That works great. Has for months (or even years). Why am I writing this post? Fair question. Let's take a look at refactoring this controller into an ES2015 class. Our first pass might look like this:

'use strict';

class MyController {
    constructor ($scope, orders, dataService) {
        this.$scope = $scope;
        this.orders = orders;
        this._dataService = dataService;
    }

    addOrder (order) {
        // ... do some stuff
        // get original
        this._dataService.get(order.id)
            .then(this._updateOrders)
            .catch(function (error) {
                // do something with the error
            });
    }

    _updateOrders (order) {
        // ... some process got our order index from orders, then...
        this.orders[index] = $.extend(true, this.orders[index], order);
    }
}

MyController.$inject = ['$scope', 'orders', 'dataService'];

export {MyController};

That looks good, right? Well....

When MyController.addOrder() gets called, with this code, the get() method is called on the service, and... BOOM! Error. It says there is no _updateOrders() on this. What? What happened?

Well, it's not on your scope. Why? Because of how "this" binding works in JavaScript, sharpened by ES2015 classes running in strict mode. When you pass this._updateOrders as a bare callback, it loses its binding; "this" is not your controller inside the Promise's then() at that point. But then, how are you supposed to reference other methods of your class?

Bom! Bom! BAAAAHHHHH! Use an arrow function. "Wait? What?" (lots of confusion today) That's right, an arrow function. From MDN:

An arrow function expression (also known as fat arrow function) has a shorter syntax compared to function expressions and lexically bind the this value (does not bind its own this, arguments, super, or new.target). Arrow functions are always anonymous.

If you aren't still confused at this point you are a rockstar. Basically what it says is that this will be the this of the context in which you define the arrow function. So, in terms of a Promise, if we change our addOrder() method like this:

addOrder (order) {
    // ... do some stuff
    // get original
    this._dataService.get(order.id)
        .then((order) => this._updateOrders(order))
        .catch(function (error) {
            // do something with the error
        });
}

This fixes our this scoping problem within our then(). Now, I know this isn't much in the way of an explanation of "how" it fixes it (other than setting the right this), and I know I'm not explaining what an arrow function is either. Hopefully this is enough to stop you from banging your head against the wall, provides a solution, and gives you some clues on what to search for when digging for additional information.
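
As an aside (an alternative of my own, not from the original post), Function.prototype.bind() solves the same problem without an arrow function, by pinning this ahead of time:

this._dataService.get(order.id)
    .then(this._updateOrders.bind(this))
    .catch(function (error) {
        // do something with the error
    });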

So, as always I welcome your feedback/suggestions/criticisms. Feel free to add a comment below, or drop me a direct line through the "contact" links on this page.

Death to Var - Why Let and Const Really Interest Me In JavaScript

Today I want to talk about the value of ES2015's new let and const variable declarations, and give you some use case scenarios. But first, let me tell you why I was really looking at all.

Ben Nadel is one of my favorite people. You will not, ever, meet a nicer guy. Ben is the kind of guy that if the two of you were walking down the street in a blizzard, and you were cold, he'd give you the shirt off of his own back and go topless so you wouldn't freeze. Yes, he really is that nice of a guy.

I'd like to say that I've learned many things from Ben over the years. He blogs about everything as he learns it, sharing what he finds along the way. And he's the first to tell you that he's not always right. Sometimes the comments to his posts are even more informative than the posts themselves. And, sometimes, he gives his opinion on a matter of programming and that opinion might not always follow best practice.

About a week ago, Ben posted an article titled Var For Life - Why Let And Const Don't Interest Me In JavaScript. He's very clear, in his post, saying that his article is an opinion piece. His thoughts are clear, his examples make sense, and it's easy to see where he's coming from. You'll also find some really thought provoking discussion in the comment thread both for and against.

But I think it's important to truly explore these new constructs in JavaScript. They were introduced with one true goal in mind: to help manage memory in our applications. With the proliferation of JavaScript based applications, both client-side and server-side, the need to carefully analyze our architecture has increased a dozen fold. How you manage your variable declarations will directly impact your overall memory utilization, as well as assist you in preventing race conditions within your app. The let and const declarations really fine tune that control.

The let declaration construct is fairly straightforward. It is a block level scoping mechanism, supplanting var usage in most situations, and confining access to those variables to the block in which they're declared. The var declaration construct, by contrast, is function level scoped. What's the difference between block level scoping and function level scoping? Consider the following:

for (var i = 0; i < 10; i++) {
    console.log('i = ', i);
}
console.log('now we are outside of our block. i = ', i); // i now equals 10

Function level scoping means that variables declared using the var construct are available only within the confines of that function, but are not restricted to the block they are declared within. Running the above example shows you that i still exists outside of the for loop block. What happens though if we change that declaration to a block level declaration?

for (let i = 0; i < 10; i++) {
    console.log('i = ', i);
}
console.log('now we are outside of our block. i = ', i); // throws an error that i doesn't exist

In the case above, the variable i is now block scoped and, as such, is only available within the confines of the for loop. The variable is cleared from memory once execution is complete (since no references to it were created in the block), and its value is not available outside of the block, reducing the opportunity for race conditions.

Probably the most misunderstood of these constructs is the const form of variable declaration. Most still think of this as setting an immutable constant, but that's not entirely correct. Let me give you an example:

const myVar = 'JavaScript is really ECMAScript';
console.log(myVar.replace('really ', ''));
myVar = 'Purple Haze'; // This throws an error, because you can't do this

OK, that example supports that whole "immutable constant" kinda thing. But that isn't the whole story. Let's look at another example:

const myVar = {};
myVar.foo = 'bar';
console.log(myVar.foo);
myVar = {}; // You were just fine til you got to this line

"Wait? What?" Yes, you can change a variable declared with const. Sorta.

When you declare a variable with const, you are binding it to a specific location in memory, of the type you initially assign. You can adjust properties of that variable, but you cannot reassign the variable, even to another value of the same type. This is why the examples with a simple type (string or numeric or boolean) throw an error on reassignment, but you can create, remove, and adjust object keys or array elements all day long. The value itself isn't constant; its location in memory is.
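
If what you actually want is an immutable object, that's a different tool (a note of my own): Object.freeze() locks the contents, where const only locks the binding:

const myVar = Object.freeze({foo: 'bar'});
myVar.foo = 'baz'; // silently ignored (throws a TypeError in strict mode)
console.log(myVar.foo); // still 'bar'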

Which allows me to change an example from a previous post. In that post, I talked about using implicit ES2015 getters and setters, and showed an example of broadcasting a variable change in a service from within a custom setter method. I had a variable in my Controller that was not passed in to the Service by reference, so any time I changed the Service variable it had to broadcast that change to my Controller so I could update the controller-level variable. In my original example, the variable was assigned to the class' "this" scope. But with const I can assign that variable and hold its location in memory, thereby passing the memory reference and changing how I can control workflow.

'use strict';

class MyController {
    constructor ($scope, dataService, orderService) {
        this.$scope = $scope;
        this._dataSvc = dataService;
        this._orderSvc = orderService;

        const myCrazyVar = {};
        // setting to 'this' too, for a publicly accessible controller reference
        dataService.myCrazyVar = this.myCrazyVar = myCrazyVar;
    }
}

MyController.$inject = ['$scope', 'dataService', 'orderService'];

export {MyController};

'use strict';

class DataService {
    constructor () {
        this.myCrazyVar = null;
    }
}

export {DataService};

'use strict';

class OrderService {
    constructor (dataService) {
        this._dataSvc = dataService;
    }

    add (order) {
        // update our shared data
        this._dataSvc.myCrazyVar.orderid = order.id;
    }
}

OrderService.$inject = ['dataService'];

export {OrderService};

Is this wise? I'm sure if you aren't careful you can create issues. But, by passing that memory reference around you also eliminate the need to duplicate variables and broadcast events unnecessarily, reducing your memory footprint and CPU utilization.

Learning when to use let and when to use const will take some time for many who've worked with JavaScript for any length of time. I'm sure this will be one of those new features that takes significant time to gain true traction among developers. In the long run, it will force us all to think ahead about the architecture of our applications (always a good thing), and the impact of our code on performance.

Now, if I can just convince Ben ;)
