Cristian Sanchez

Introduction to Transducers (in JavaScript)

A few months back the Clojure team (Rich Hickey et al.) iterated on Clojure’s collection processing utilities by formalizing the concept of a Transducer (a further generalization of reducers). The essence of transducers is that they let us define composable transformations that can be applied to different types of things, completely agnostic of how those things are implemented. Less vaguely, this concept lets us define operations like map, filter, flatMap, etc. without caring about the implementation details of the container type of either the source or the output. This means we can define these transformations once, compose them, and conceivably apply them to lists, iterables, observables, channels, or any type that emits or produces values.

Transducers accomplish this through the fact that reduce (fold) is a commonly implemented operation on collection types. Furthermore, many operations can provably be expressed in terms of fold either directly or indirectly. Thus, reducing functions (the type of function you might pass to reduce) are a highly applicable building block that can be used to express many kinds of computations. With this in mind – the utility of transducers is that they operate on and produce reducing functions. Additionally, they can be composed using ordinary function composition.
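To make that concrete, here is a minimal sketch in plain JavaScript. The helper names `mapping`, `filtering`, and `compose` are illustrative choices of my own, not from any particular transducer library:

```javascript
// A reducing function has the shape (accumulator, value) -> accumulator.
// A transducer takes a reducing function and returns a new reducing
// function -- it never touches the collection itself.

var mapping = function (f) {
    return function (step) {
        return function (acc, x) { return step(acc, f(x)); };
    };
};

var filtering = function (pred) {
    return function (step) {
        return function (acc, x) { return pred(x) ? step(acc, x) : acc; };
    };
};

var compose = function (f, g) {
    return function (x) { return f(g(x)); };
};

// Keep odd numbers, then multiply them by ten. The composed transducer
// is agnostic of arrays -- it only wraps the reducing function below.
var xform = compose(
    filtering(function (x) { return x % 2 === 1; }),
    mapping(function (x) { return x * 10; })
);

var append = function (acc, x) { acc.push(x); return acc; };

[1, 2, 3, 4].reduce(xform(append), []); // [10, 30]
```

Because `xform` only knows about reducing functions, the same composed transformation could just as well be driven by a channel or an observable that folds over incoming values.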


Parallel Processing in the Browser with Web Workers, ES6 Generators, and Promises

One of the coolest new features coming in ES6 Harmony is support for iterators and generators (learn more). If you aren’t familiar with the concept, generators create iterable sequences of elements that aren’t evaluated until they are iterated through. Generator functions, through the use of the yield operator, are similar to coroutines in that they can pause execution and yield control to another part of the program.

A very simple example of a generator in action is:

var countToThree = function* () {
    yield 1;
    yield 2;
    yield 3;
};

var iterator = countToThree();
iterator.next().value; // 1
iterator.next().value; // 2
iterator.next().value; // 3

Another important feature of generators is that a caller can also send values back into the to-be-resumed generator function. With this in mind, it was soon realized that when used in conjunction with Promises (representations of “asynchronous values,” standardized in the Promises/A spec), generators could be used to mitigate the “callback hell” that comes as a result of JavaScript’s single threaded nature. The result is vastly cleaner and more straightforward code.
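A minimal, self-contained example of this two-way communication: `next(value)` resumes the generator, and the paused `yield` expression evaluates to `value`.

```javascript
var echo = function* () {
    // The value passed to the second next() call becomes the result
    // of this yield expression.
    var received = yield "ready";
    yield "got: " + received;
};

var it = echo();
it.next().value;        // "ready"
it.next("hello").value; // "got: hello"
```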

Generators will allow for code like:

// imagine that 'run' is a function that automatically resumes execution of a generator
// once a promised value has been resolved. (It can't literally be named 'do' --
// that's a reserved word in JavaScript.)
run(function* () {
    var userId = yield $.get(findUserByNameUrl, { name: "John Doe" });
    var birthDate = yield $.get(getBirthDateByUserUrl, { userId: userId });
    console.log(birthDate);
});


Over today’s:

// We could also use the "promises" that jQuery returns in "today's" version, but
// either way, it's not as clean as the generator + promises version.
$.get(findUserByNameUrl, { name: "John Doe" }, function (userId) {
    $.get(getBirthDateByUserUrl, { userId: userId }, function (birthDate) {
        console.log(birthDate);
    });
});

This is great! I really can’t wait until more browsers start implementing this feature (Node.js users can get this now by enabling ES6 mode).

But alas, this has already been covered in depth on several other blogs. There are also many libraries out there that support this. Task.js has been around since October 2010, faithfully waiting until the day that generators are consistently supported in the major browsers. Q.js, one of the most fully featured implementations of the Promises/A spec, also supports this through its Q.async and Q.spawn utility methods.
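For the curious, the core of what these libraries provide can be sketched in a few lines with native promises. `run` here is a hypothetical stand-in for Task.js's `spawn` or Q's `Q.spawn`, not the actual implementation of either:

```javascript
// Drive a generator, resuming it with each yielded promise's resolved
// value; the returned promise settles with the generator's return value.
var run = function (genFn) {
    return new Promise(function (resolve, reject) {
        var gen = genFn();
        var step = function (value) {
            var result;
            try {
                result = gen.next(value);
            } catch (err) {
                return reject(err); // generator threw synchronously
            }
            if (result.done) {
                return resolve(result.value);
            }
            // Wrap non-promise yields too, then resume on resolution.
            Promise.resolve(result.value).then(step, reject);
        };
        step(undefined);
    });
};

// Usage: each yield suspends until the promise settles.
run(function* () {
    var a = yield Promise.resolve(1);
    var b = yield Promise.resolve(a + 1);
    return a + b;
}).then(function (total) {
    console.log(total); // 3
});
```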

At any rate, I think this is a pretty cool concept. How far can we push it?

Combining Web Workers with Generators + Promises

Late last summer, I wrote a small utility/proof of concept called SimpleWorker (@GitHub) that allows for easy, inline definitions of Web Workers. Additionally, it uses Promises to represent the asynchronous results of the computations from the web worker threads.

With that in mind, it should now be apparent that this allows us to easily delegate computationally expensive work out to web workers.

First, let’s define a naive recursive implementation of the Fibonacci series using SimpleWorker. If this were run in the main thread, it would most definitely lock up the UI as it is CPU bound.

var fibonacci = SimpleWorker(function (n) {
    var fib = function (n) {
        if (n === 0 || n === 1) {
            return n;
        }
        return fib(n - 1) + fib(n - 2);
    };
    return fib(n);
});

Now, let’s use it with Q.spawn:

$('form').submit(function () {
    var $form = $(this),
        $input = $("#input"),
        $output = $("#output");
    Q.spawn(function* () {
        var n = parseInt($input.val(), 10),
            result = yield fibonacci(n);
        $output.text(result);
    });
    return false;
});

You can try it out for yourself on this jsfiddle (Firefox only). If you enter a sufficiently large number (n=45+ on my laptop) your CPU will start churning away at the result – but notice that the UI is still fully responsive and animations continue to run smoothly while it’s doing so.

This is a pretty simple example, but it just goes to show once again how absolutely versatile Promises can be in JavaScript. And when paired with generators, they become that much easier to use.

At the time of this post, Firefox is the only browser that has support for generators. Chrome Canary also has support for generators, but the setting must be enabled in about:flags explicitly.

Software Developers & The Golden Mean

Usually when I tell people that I’m a web developer, those who are somewhat more informed will typically ask “frontend or backend?” (and if they aren’t, it’s usually more along the lines of “oh.. like web sites and stuff?”). After a brief pause, I’ll usually say “both” because I don’t like pigeonholing myself into one category. But it’s hard to deny that there’s increasing pressure to specialize, especially with the “Web Frontend Renaissance” of recent years: the maturation of the ecosystem, e.g. the influx of new frameworks, libraries, and tooling, paired with increased cooperation among the browser vendors. In other words, a whole lot of new stuff to learn.

In ancient philosophy, the principle of moderation was a common thought throughout different cultures – from Greek philosophy in the West, to Confucianism and Buddhism in the East. One of the most well known proponents of this principle was Aristotle, the Greek philosopher and polymath. Aristotle espoused the principle of the Golden Mean, which Wikipedia introduces as:

In philosophy, especially that of Aristotle, the golden mean is the desirable middle between two extremes, one of excess and the other of deficiency. For example, in the Aristotelian view, courage, a virtue, if taken to excess would manifest as recklessness and if deficient as cowardice.

Aristotle wrote about the Golden Mean in the context of virtues and what a virtuous person should be like. That’s all fine and dandy, but we have more important things to consider. In the tradition of taking a concept in one area and applying it to a totally different area, are we able to use this principle to decide whether to specialize as a software developer?

Aristotle’s Ideal Software Developer

The obvious application of Aristotle’s principle would yield that no, software developers shouldn’t specialize any further. They should lie somewhere between backend developer and frontend developer. You may be thinking to yourself, “how am I supposed to be both – am I expected to keep up on the latest frontend technologies while simultaneously staying on my backend game?” After all, Aristotle believed that we should be balanced individuals on the whole – how are you supposed to keep up with society, culture, fitness, etc. when you need to spend all your time reading up on the Frontend Framework/Library of the Month and the newest Cloud and NoSQL solutions?

Clearly you shouldn’t. You should apply the principle and intelligently moderate and balance your learning and experience between these two realms of application development. What this means in practice is that you should:

Maintain and build up your fundamental knowledge in both areas

Rarely does a technology come along that is so beyond what is currently out there that it baffles technologists. Keep your fundamentals strong and you can always scale your knowledge up and out if need be.

Keep an ear to the ground, but don’t react too quickly

You should always be aware of your surroundings. Stay up to date on what’s out there, but don’t jump at every technology that crosses your path for the sake of it. Chances are, if it’s worth learning it will stick around through technological selection. The exception to this is if you find something that absolutely piques your interest. Then it’s okay :) .

Constantly evaluate your position on the equilibrium

Always be aware of where you are on the equilibrium. There’s absolutely nothing wrong with leaning one way or the other. But always evaluate where you are so you don’t stray too far in one direction.

The Middle Isn’t Always Right

So we’re able to maintain a balance if we’re somewhat methodical and disciplined. But why should we? Sometimes balance and moderation make us feel all warm and fuzzy inside – but logically, moderation isn’t always the correct position.

“Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration.” — Stan Kelly-Bootle

Is it professionally advantageous to take the central position? Maybe not.

A few years ago I gave my dad a Leatherman Multi-tool (think Swiss Army Knife on steroids) as a gift. Yet to this day, whenever he needs to tinker around with something, he doesn’t take out his multi-tool. Instead, he goes to the garage and finds the right specialized tool for the job. As a result, the multi-tool doesn’t get much use, if any.

Could it be that employers are similarly seeking out more specialized employees? Possibly. I haven’t done the analysis to say whether or not that could be.

Software Is About The Big Picture

But software development isn’t purely about the skill set. It’s about comprehension and analysis – being a full stack developer affords you the knowledge to be able to comprehend the application at different levels of the stack and at different stages of the business process. As a full stack developer, regardless of which level you’re currently working at, it could be argued that having that application-wide comprehension allows you to make better localized decisions. Ultimately, I believe companies are looking for developers with the critical thinking skills to be able to decompose and find solutions for problems using their breadth of knowledge and experience. Having a few desirable skills might get your foot in the door, but in the long run you’ll need much more than that.

It’s A Balancing Act

In the end, Aristotle realized that being a truly balanced and virtuous person was actually really hard (so much so that he thought a certain type of upbringing was required). Likewise, being a full stack developer isn’t easy – it requires a lot of dedication, awareness, and foresight. It’s a tough path to take, but it offers a massive amount of professional and entrepreneurial potential. As for me, I’m going to continue on the full-stack developer track. I still have a ways to go, but I think it’s the right direction.

If there’s any confusion, I’m speaking in the context of web application development. Though I believe this principle / dilemma is generally applicable to all types of platforms. If you’re wondering exactly what I mean by “full stack developer,” Laurence Gellert has what I think is a good definition.

SimpleWorker – Inline Web Workers + Promises

This weekend I decided to write a simple utility to make working with WebWorkers even easier: SimpleWorker.js is a utility for defining web worker tasks inline. It lets you write a function which is executed in the context of a new Web Worker. It uses a Promise (powered by Q) to represent the result of the asynchronous background computation.

A simple example is:

var add = SimpleWorker(function (a, b) {
  return a + b;
});

var promise = add(1, 2);

promise.then(function (result) {
  alert(result); // 3
});

SimpleWorker embraces promises throughout. If the result within your web worker is also asynchronous, you can return a promise-like object from your worker and it will automatically post back the result to the main thread.

var add = SimpleWorker(function (a, b) {
  var deferred = Q.defer();

  setTimeout(function () {
    deferred.resolve(a + b);
  }, 3000);

  return deferred.promise;
}, ['']);

This particular example loads Q.js to provide promises from within the worker. In general, you can use this to take advantage of other third party libraries that use promises. This isn’t always realistic – if you need to return a result asynchronously, you can do so by calling the postResult(result) function from within your worker.
The Details

SimpleWorker tasks are executed within the context of a new WebWorker. This has implications and you should use the following guidelines to decide whether you should use it:

  • Your task function cannot access any resources from your main thread. Any references to variables from the containing scope will be unresolved and throw an error.
  • There is an overhead cost associated with spinning up a new WebWorker. Use them only when you have a task which is computationally expensive, has the potential to block the UI thread, and which you are able to isolate.
  • You can pass arguments into your worker and get back a result. However, due to the limitation of the “Structured Cloning” algorithm which implements serialization in message passing between Web Workers and the main thread, you cannot pass in or return back Error objects, DOM elements, or functions. Read more.

You can find the source to SimpleWorker.js on GitHub.

Getting started with D3.js

(The purpose of this post is share my first experiences with D3.js. This isn’t a tutorial – there are plenty of great tutorials already out there. If you are Deeply Inspired by my post and want to learn D3.js for yourself, I’ve provided a few links to D3 learning resources at the end of this post.)

Recently I’ve been thinking that any front-end or full stack developer worth their salt should know the basics of data visualization. Needless to say, web applications today are highly data driven – there is and will often be a necessity to present the user with a high level, yet informative, view of their data. Thus, I’ve taken the initiative to get my feet wet in data visualization by getting acquainted with D3.js.

D3.js is a JavaScript library for DOM manipulation with support for SVG. It features goodies like data binding, transforms and an expressive API. Think of it as jQuery for data driven documents. Since interactivity is the name of the game in web applications (who wants boring static data visualizations?), I figure that a DOM based solution will provide the most utilitarian value. While there’s also the canvas element, it seems to me that it’s rather tedious to build interactivity into canvas components.

Like most people, when I’m learning a new library I like to start off with a simple example to get a feel of things. I decided to create a visualization of the Monte Carlo method for approximating pi. It’s a fairly simple concept. Imagine a 2×2 square centered at the origin on the Cartesian coordinate plane – then imagine a unit circle (r=1) inscribed within that box. If you were to plot a random point within the bounds of that box, the probability that the point is within the circle would be the ratio of the area of the circle to the area of the square, i.e. P = (pi * r^2) / (4 * r^2) = pi / 4. By plotting random points and counting which of those points are within the circle, we can estimate pi ~= 4 * (# points inside circle / total # points).
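Independent of any visualization, the estimate itself is only a few lines of plain JavaScript:

```javascript
// Monte Carlo estimate of pi: sample random points in the unit square
// (one quadrant of the setup described above) and count how many land
// inside the quarter circle x^2 + y^2 <= 1.
var estimatePi = function (samples) {
    var inside = 0;
    for (var i = 0; i < samples; i++) {
        var x = Math.random();
        var y = Math.random();
        if (x * x + y * y <= 1) {
            inside++;
        }
    }
    return 4 * inside / samples;
};

estimatePi(1000000); // typically within a couple hundredths of 3.14159
```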

To visualize this, all we really need to know is how to plot axes and points. For simplicity’s sake, I only included the first quadrant where x and y are greater than zero. The ratio used to calculate pi holds due to symmetry.


A Simple Helper for Declaratively Binding jQuery Plugins

I don’t use jQuery plugins very often, but when I do, I find myself annoyed at having to wire them up manually. Not only does it irk me that it’s making me write code, but to me there just seems to be something inherently unprincipled about requiring two levels of configuration (one in the HTML/view and one in the JavaScript code), not to mention the coupling that it incurs e.g. selectors in JS code.

Cool frameworks like Ember.js and Angular have helped solve similar problems and have improved re-usability and modularity, but we don’t always have the luxury of working on a project that uses one of these frameworks, and sometimes have to work with an existing (and sometimes poorly thought out) JS + jQuery codebase.

To alleviate this pain, I’ve come up with a small helper. With the use of a special CSS class and some data-* attributes, this little helper will search through the DOM and wire up jQuery plugins for you (as long as those plugins use the standard jQuery plugin constructor pattern).

Using it is fairly simple. We use the special “plugin” css class to tell the helper to wire up this element to the jQuery plugin specified by the data-* attributes. The data-plugin attribute specifies the plugin name, and the data-{pluginName}-* attributes are passed into the plugin constructor as configuration options.

In this example, I use this helper with the jQuery UI date picker. I use the data-datepicker-change-year and data-datepicker-change-month attributes to set the changeYear and changeMonth options, respectively, which allow the user to select the year and month using select menus.

Please note that the values for the data-{plugin}-* attributes must be valid JSON expressions that must be able to be parsed by JSON.parse. This also means that if you need to pass in a string as a configuration option, you’ll have to surround it with an extra pair of quotes to make it a valid expression.

Below is a live working example of this in action!

And here’s the helper:
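A minimal sketch of such a helper, assuming jQuery and the conventions described above (the function names here are illustrative choices of my own, with the option parsing factored out as a pure function so it can stand alone):

```javascript
// Example markup this helper would wire up:
//   <input class="plugin" data-plugin="datepicker"
//          data-datepicker-change-year="true"
//          data-datepicker-change-month="true" />

// "data-datepicker-change-year" -> "changeYear" (null if the attribute
// doesn't belong to the given plugin).
var optionKey = function (attrName, pluginName) {
    var prefix = "data-" + pluginName.toLowerCase() + "-";
    if (attrName.indexOf(prefix) !== 0) {
        return null;
    }
    return attrName.slice(prefix.length).replace(/-([a-z])/g, function (m, c) {
        return c.toUpperCase();
    });
};

// Build the plugin options object from a list of { name, value }
// attributes. Values must be valid JSON, per the note above.
var optionsFromAttributes = function (attributes, pluginName) {
    var options = {};
    for (var i = 0; i < attributes.length; i++) {
        var key = optionKey(attributes[i].name, pluginName);
        if (key !== null) {
            options[key] = JSON.parse(attributes[i].value);
        }
    }
    return options;
};

// Browser wiring: requires jQuery and plugins that follow the standard
// jQuery plugin constructor pattern.
if (typeof $ !== "undefined") {
    $(function () {
        $(".plugin").each(function () {
            var pluginName = $(this).data("plugin");
            $(this)[pluginName](optionsFromAttributes(this.attributes, pluginName));
        });
    });
}
```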

This won’t solve all of my problems. Sometimes there are situations where you have no choice but to wire it up manually. Nonetheless, I believe this helper works very well for the common case.