Tomek on Software

Wednesday, June 11, 2014

Playing audio from Node.js using Edge.js

The Edge.js project allows you to use .NET Framework inside of a Node.js application. Why would you ever do that? Scott Hanselman puts it this way:


One such problem is playing audio. Node.js core does not support this functionality, so one must resort to writing a native extension in C/C++. You can dust off that Stroustrup book, tool up for memory leak detection, prepare for segfaults/AVs, get yourself a bucket of coffee, and plow on to write some serious C code.

Alternatively, you can do it with two lines of C# code…

Enter Edge.js

… and then call into these two lines of C# code from Node.js using Edge.js:

   1:  var edge = require('edge');
   3:  var play = edge.func(function() {/*
   4:      async (input) => {
   5:          var player = new System.Media.SoundPlayer((string)input);
   6:          player.PlaySync();
   7:          return null;
   8:      }
   9:  */});
  11:  console.log('Starting playing');
  12:  play('dday.wav');
  13:  console.log('Done playing');

So what happens here? We are using the System.Media.SoundPlayer class from .NET Framework to play a PCM WAV file (lines 5 & 6). We wrap this logic in a C# async lambda expression (line 4). Then we use the edge.func function of Edge.js to create a JavaScript proxy around this async lambda expression (line 3). Lastly, we call that JavaScript proxy function and pass it the file name of the WAV file to play (line 12).
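The `function () {/* ... */}` syntax may look odd: the multiline C# source travels as the body of a JavaScript comment. The sketch below illustrates the extraction idea (an illustration only, not Edge.js's actual implementation):

```javascript
// Illustration of the commented-function idiom: the multiline C# source is
// carried inside a JavaScript comment and recovered via Function.toString().
// This is a sketch of the idea, not Edge.js's actual implementation.
function extractSource(fn) {
    var text = fn.toString();
    return text.substring(text.indexOf('/*') + 2, text.lastIndexOf('*/'));
}

var source = extractSource(function () {/*
    async (input) => { return null; }
*/});
// source now holds the C# text that appeared between the comment markers
```

This is why the C# code can live inline in a .js file without upsetting the JavaScript parser.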

Edge.js allows you to call .NET functions from Node.js and Node.js functions from .NET. Edge.js takes care of marshalling data between CLR and V8. Edge.js also reconciles threading models of single threaded V8 and multi-threaded CLR, and ensures correct lifetime of objects on V8 and CLR heaps. And all that happens within a single process – Edge does not spawn separate CLR processes. Read more in the Edge.js documentation.

Coming back to playing audio. If you run the code above you will notice that the Done playing message is only printed to the console after the audio has finished playing. This is because the C# code executes on the singleton V8 thread of Node.js, and the Node.js event loop remains blocked. This is of course unacceptable…
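You can observe the same effect without Edge.js at all. In this sketch a busy-wait stands in for the synchronous PlaySync() call; a timer queued beforehand cannot fire until the synchronous work completes:

```javascript
// A busy-wait stands in for the synchronous C# PlaySync() call: while it
// runs on the V8 thread, the Node.js event loop cannot process the timer.
var order = [];

setTimeout(function () {
    order.push('timer'); // only fires after the synchronous work is done
}, 50);

function playSyncStandIn() {
    var end = Date.now() + 200;
    while (Date.now() < end) { } // block the V8 thread for 200 ms
}

order.push('start');
playSyncStandIn();
order.push('done');
// at this point order is ['start', 'done']; 'timer' arrives only afterwards
```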

Enter CLR threads

… so let’s fix it. We need to add two more C# lines to play our audio on a CLR thread pool thread and avoid blocking the V8 thread:

   1:  var edge = require('edge');
   3:  var play = edge.func(function() {/*
   4:      async (input) => {
   5:          return await Task.Run<object>(async () => {
   6:              var player = new System.Media.SoundPlayer((string)input);
   7:              player.PlaySync();
   8:              return null;
   9:          });
  10:      }
  11:  */});
  13:  console.log('Starting playing');
  14:  play('dday.wav', function (err) {
  15:      if (err) throw err;
  16:      console.log('Done playing');
  17:  });
  18:  console.log('Started playing');

Notice how we create a new CLR thread pool thread in line 5 and let that thread play our audio. This leaves the V8 thread free to process whatever other events need processing. Also notice that the play JavaScript proxy function can still detect when the audio has finished by supplying an async callback in line 14. Edge.js will invoke that callback only after the C# async lambda expression completes, which happens when the audio has finished playing and the CLR thread pool thread terminates in line 8. The fact that the Node.js event loop remains unblocked is evidenced by the Started playing message from line 18 showing up before the Done playing message from line 16.
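The same non-blocking behavior can be demonstrated in plain Node.js. Here `play` is a hypothetical stand-in for the Edge.js proxy: it follows the same (error, result) callback convention and returns before the callback runs:

```javascript
// `play` is a hypothetical stand-in for the Edge.js proxy function; like the
// real proxy, it takes an input and an (error, result) callback and returns
// immediately, invoking the callback on a later event loop turn.
function play(file, callback) {
    setImmediate(function () {
        if (typeof file !== 'string') {
            return callback(new Error('expected a file name'));
        }
        callback(null, 'played ' + file);
    });
}

var calledBack = false;
play('dday.wav', function (err, result) {
    if (err) throw err;
    calledBack = true;
    console.log('Done playing');
});
console.log('Started playing');
// the call has already returned, yet calledBack is still false here
```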

At this point we seem to be done. While we wait for the folks cranking C code to finish (ETA: one more week), we can indulge in a more fancy experiment.

Enter closures

Now that we can play a simple WAV file asynchronously, how about adding some more control over the experience? Let’s have a way to start and stop playing the audio asynchronously at any time.

This calls for one of the more interesting features of Edge.js: the ability to marshal function proxies across the V8/CLR boundary. Moreover, functions exposed from CLR to Node.js can be implemented as closures over other CLR state, which opens interesting possibilities. For example, allowing an instance of System.Media.SoundPlayer to be controlled from Node.js:

   1:  var edge = require('edge');
   3:  var createPlayer = edge.func(function() {/*
   4:      async (input) => {
   5:          var player = new System.Media.SoundPlayer((string)input);
   6:          return new {
   7:              start = (Func<object,Task<object>>)(async (i) => {
   8:                  player.Play();
   9:                  return null;
  10:              }),
  11:              stop = (Func<object,Task<object>>)(async (i) => {
  12:                  player.Stop();
  13:                  return null;
  14:              })
  15:          };
  16:      }
  17:  */});

We are using Edge.js to construct a createPlayer JavaScript function (line 3). This function wraps C# logic that acts as a factory method. It first creates an instance of System.Media.SoundPlayer (line 5). Then it returns an anonymous object with two functions on it: start and stop. Both functions are implemented as closures over the instance of SoundPlayer created in line 5, starting and stopping the playback, respectively.

This is how you can use the createPlayer function:

   1:  console.log('Creating player');
   2:  var player = createPlayer('dday.wav', true);
   4:  player.start(null, function (err) {
   5:      if (err) throw err;
   6:      console.log('Started playing');
   7:  });
   9:  setTimeout(function () {
  10:      player.stop(null, function(err) {
  11:          if (err) throw err;
  12:          console.log('Stopped playing');
  13:      });
  14:  }, 5000);

First we create a player in line 2. The player is a JavaScript object with two properties: start and stop. Both are JavaScript functions acting as proxies to the corresponding C# async lambda expressions created within createPlayer. We invoke the start function to begin playing the audio asynchronously on a CLR thread pool thread (line 4). Five seconds later, we stop the playback by calling the stop function (line 10).
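The closure pattern translates directly to JavaScript. This sketch (no Edge.js involved, no actual audio) mirrors the C# factory: both returned functions close over the same state created by the factory call:

```javascript
// JavaScript analogue of the C# factory above: start and stop are closures
// over the same `state` object, just like the C# lambdas close over `player`.
// Illustration only; no audio is played.
function createFakePlayer(file) {
    var state = { file: file, playing: false };
    return {
        start: function () { state.playing = true; return state.playing; },
        stop: function () { state.playing = false; return state.playing; }
    };
}

var fake = createFakePlayer('dday.wav');
var afterStart = fake.start(); // true: shared state flipped by one closure
var afterStop = fake.stop();   // false: the other closure sees the same state
```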

So what does it all mean?

It means that in many cases it is much easier to write a few lines of C# and use Edge.js rather than a truckload of C code to add “native” functionality to Node.js.

Dude, Edge.js surely only works on Windows, why are you wasting my time?

Dear Dude, I am pleased to inform you that Edge.js works on Mac and Linux as well as Windows. Yours truly.

Friday, May 30, 2014

JSConf.US 2014 Lapidarium

Eleanor Roosevelt once said, “Great minds discuss ideas; average minds discuss events; small minds discuss people”. During JSConf, ideas are discussed. This also leaves a lot of time for conversations between people.

I had the privilege to speak during JSConf.US 2014 on the topic of in-process interop between Node.js and CLR. However, the true highlight of the event for me was watching other talks and engaging in deeper conversations with fellow attendees and speakers, some of whom I had known in person or via the internet before, and some of whom I had just met.

Below are selected takeaways from these presentations and conversations, in no particular order.

Sometimes the best way to connect with people is to disconnect from internet [@renrutnnej, @brianloveswords]

A capacitor can be used to stabilize circuit voltage [@digitalman2112]

One can go from knowing nothing about electronics to building a NodeBoat controlled remotely via WiFi from a MacBook or a Pebble in under 6 hours [@rhagigi, @digitalman2112, yours truly]

Immutable memory structures are good for you, and so is ClojureScript [@swannodette]

The best way to optimize GC is to not use it. At least when working with HTML5 canvas [@angelinamagnum]

JavaScript isomorphism is a trade off between the level of code reuse and the number of abstractions that need to be built [@spikebrehm]

You can live-code a JavaScript game using Rx and ES6 in under 30 minutes. It is not certain, however, which of the three enable this: Rx, ES6, or being Bodil Stokke [@bodil]

You can engage in a distributed art project with perfect strangers by shaking your iPhone like a madman [@whichlight]

You cannot engage in a distributed art project with perfect strangers by shaking your Windows Phone like a madman for lack of Web Audio API support [yours truly]

You can transpile ES6 generators and other constructs to ES5 by converting the code to a state machine, and sprinkling it with a healthy dose of pixie dust [@benjamn]

On the note of pixie dust, did I mention you can code a Little Pony JavaScript game using Rx and ES6 in under 30 mins [@bodil]

Spy satellites have elongated orbits to allow them to get close to Earth on every pass [@franksvalli]

Electric motor servos are not watertight [yours truly]

Any mobile internet of things big data set in the cloud can be efficiently sorted with jortSort [@jennschiffer]

I am leaving JSConf.US 2014 with a lot of new ideas in my head and the memory of great conversations with people. Conference, defined.

Tuesday, May 13, 2014

Script Node.js from .NET using Edge.js

The latest release of the Edge.js project adds support for scripting Node.js from a .NET application. This enables you to leverage the power of the Node.js ecosystem, with its thousands of NPM modules, from within a CLR application.

Learn more
Get the all-inclusive Edge.js NuGet package

You can now script Node.js code (not just JavaScript) within a .NET or ASP.NET web application written in C# or any other CLR language:


The Edge.js project has existed for a while, but until now it only allowed scripting CLR code from a Node.js process on Windows, MacOS, and Linux. With the latest release, you can also script Node.js code from a CLR process.

You can call .NET functions from Node.js and Node.js functions from .NET. Edge.js takes care of marshalling data between CLR and V8. Edge.js also reconciles threading models of single threaded V8 and multi-threaded CLR, and ensures correct lifetime of objects on V8 and CLR heaps.

The most powerful aspect of the Node.js scripting capability Edge.js just enabled is that you can tap into the many thousands of Node.js modules available both in the Node.js runtime and on NPM. For example, you can now create a websocket server in Node.js with a message handler in C#, all running within a single CLR process.

Getting started

Open Visual Studio 2013 and create a new .NET console application. Then add the Edge.js NuGet package to the project using the NuGet Package Manager:


Now add a using directive for Edge.js:

using EdgeJs;

And implement the body of the Main method:

static void Main(string[] args)
{
    var func = Edge.Func(@"
        return function (data, callback) {
            callback(null, 'Node.js ' + process.version + ' welcomes ' + data);
        }
    ");

    Console.WriteLine(func(".NET").Result);
}

Compile, run, and enjoy!


Learn more

Here are more resources to get you started using Edge.js:

Get the all-inclusive Edge.js NuGet package
Scripting Node.js from CLR
What you need
Hello, world
Using built-in Node.js modules
Using NPM modules 
Handle Node.js events in C#
Manage Node.js state from C#
Script Node.js in ASP.NET

Tuesday, April 8, 2014

Mac, Windows, Ubuntu cross-platform development

“Grand[m|p]a” (have to be careful with pronouns this year), “what did you use to write cross-platform software back in 2014? No, really… do tell… Wow, amazing… It is, like, they really did not have […]?”

This post describes a cross-platform development setup that proved efficient for me in the course of several cross-platform projects (most notably Edge.js). Like anything in technology, this is a point-in-time snapshot: it has as much practical value for my contemporary engineers as it is going to have entertainment value for my daughter.

Cutting to the chase:


I am using a MacBook Pro 13” (my shoulders are getting too old to drag along the 15”) with a 512GB SSD (my ears are too old to listen to the HDD hum) and 16GB of RAM (my nerves are too highly strung to wait for Windows to do its thing).

I am running Windows 8.1 and Ubuntu 12.04 in VMs using VMware Fusion. Including the MacOS host, this covers most of my cross-platform targets.

I share the home folder on the MacBook Pro with both the Ubuntu and Windows VMs. This is the quickest way to share files across the host and guest OSes.

I use Git[Hub] for sharing public artifacts between my fellow developers and my own development machines.

I use OneDrive for sharing private artifacts between my development machines. I suppose you could use DropBox, but I am psychologically biased towards OneDrive.

I use Sublime Text for a uniform code editing experience across platforms. It is sublime. And text. Plus, Commodore is so much better than Atari. Bottom line it works x-platform and has all these fancy colors, unlike vi. Could not resist.

I use Visual Studio for those infrequent tasks that require Windows specific work. It also comes really handy when profiling code. Turns out some of the code that is slow on Windows is also slow on *nix and MacOS. Yes, at the end of the day everything boils down to E=mc^2.

I use Windows Live Writer to write this post.

Friday, January 10, 2014

Workers on a shoestring in Windows Azure Web Sites

Hosting web apps in Azure, by the book

Many web apps consist of web, worker, and storage components. The web component handles HTTP traffic from clients which results in new work items (e.g. uploaded pictures that need to be resized). The worker component performs the actual work independently from interactions between the client and web component. Web and worker components exchange state using some form of external storage (e.g. a database or a queue).


In the general case, the three components are deployed to separate server farms to accommodate different scalability, reliability, computing resource, and process lifetime requirements.

For web applications like this hosted in Windows Azure, there is a natural mapping of the web, worker, and storage components onto Windows Azure concepts. The web component would run in Windows Azure Web Sites, which is by far the most convenient way of hosting web tier code in Azure. The worker component would run as a Hosted Service or a Virtual Machine. The storage component is not something you want to run yourself these days, unless you have a very compelling reason not to use one of the many hosted storage solutions available in Azure (MongoHQ, MongoLab, Azure Blob, Azure Table, SQL Azure, etc.). The details of storage farm management are abstracted away from you, and your app perceives storage as an endpoint to talk to, sticking out of a black box.

Given all that, a web application hosted in Azure would look like this:


Problem in paradise

While using Azure to develop a web application like the one above, the experience gap between working on the web tier hosted in Windows Azure Web Sites and the worker tier running in Hosted Services becomes apparent and annoying very quickly.

Web Sites support code deployment in seconds using git. Hosted Services take minutes to update code and require it to be done from Visual Studio or Windows-only command line tools. Web Sites provide a very convenient streaming log feature. Getting logs out of a Hosted Service is brittle.

As a developer, I would love to have a worker tier development experience match that offered by Windows Azure Web Sites. I want quick, git-based deployment for both web and worker code. I want to deploy from Mac or Windows without discrimination. I want my streaming logs available for both web and worker, or perhaps even unified.

Let’s break some rules

To achieve my ideal development experience, I am going to run both web and worker tier code in Windows Azure Web Sites:


In general this is a big no-no, but there is a class of web applications for which having a single deployment container for both web and worker code is not entirely unreasonable. Below are some guidelines to decide if this is a good fit for your app.

The resource consumption profiles of the web and worker tiers should be sufficiently similar. Web tier workloads are typically IO bound: they accept HTTP requests, do some minimal processing, turn around and exchange some data with the storage tier, then respond to the client. Worker tier profiles vary from CPU bound through memory bound to IO bound. It is reasonably safe to combine a web tier with a worker tier that is also IO bound. For example, your worker tier may implement a long running IO orchestration, coordinating processes across several distributed systems. If the resource consumption profiles of the web and worker tiers differ, chances are high that one or more classes of resources will go underutilized when the system is scaled out to handle the traffic.

The worker tier must be implemented in a way that is compatible with the process management of your web tier. In Azure Web Sites, processes run under IIS. They are only activated when HTTP requests arrive, and the recycling policy will terminate them under pre-configured circumstances, e.g. after 15 minutes without HTTP activity. You must design your worker tier to be robust enough to withstand this recycling policy. You must also mitigate the lack of control over process activation (more on this in the next section).

The benefits of running both web and worker in Windows Azure Web Sites are numerous and particularly relevant at active development phase:

  • Simplicity: there is only one artifact to deploy and manage.
  • Logging: streaming logs from both web and worker components are available in a unified form.
  • Deployment: git-deploy both web and worker code in seconds, and make the deployment atomic across the web and worker tiers.
  • Cross-platform: deploy from Mac or Windows.
  • Configuration: quickly update configuration settings of web and worker using the same mechanism (app settings in Windows Azure Web Sites propagated as environment variables to web and worker processes).

Workers on a shoestring, the practice

There are a number of considerations that must be addressed when hosting worker code in Windows Azure Web Sites.

Initializing your worker process and keeping it running

Processes running in Windows Azure Web Sites are managed by IIS. IIS itself can be configured to start a process on system startup and keep it always running. However, the configuration of IIS in Windows Azure Web Sites is different and locked down: processes are only activated when an HTTP request arrives that targets a particular application. As a corollary, without an HTTP request the process will never run.

Moreover, IIS in Windows Azure Web Sites is configured to terminate web processes for which no HTTP requests were received during a specific period (15 minutes by default, but the application has no control over this value). A new process will only be created when another HTTP request arrives.

To have a worker process initialized and running most of the time in this environment one must:

  • Create the worker process as soon as the web process is initialized by IIS. While technically you can run the worker logic from within the web process, it is a good idea to have a process boundary between web and worker. This reduces cold startup latency of the initiating HTTP request, and also helps keep web and worker logic encapsulated in case you need to split worker from web tier later. Note that if you spawn a worker process from within a web process, they are still going to run in the same Windows job object and therefore be bound by the same process lifetime policy that IIS imposes. If IIS decides to terminate the web process given its recycling policy, the worker process will be terminated with it, no questions asked.
  • Send an HTTP request to the web application periodically to ensure the web process (and the worker process spawned by it) are running. You can use an external system to send these periodic HTTP requests, but since we are implementing workers on a shoestring, let’s hack another Windows Azure feature to do the job for us for free: Health Monitoring endpoints. Every Windows Azure Website can be configured with a Health Monitoring endpoint that Azure will periodically invoke to measure and report on latency of calls originating from various places in the world:


    As it happens, Azure invokes these endpoints every 5 minutes:


    Given that you can define up to 2 monitoring endpoints per web application in Windows Azure Web Sites, and each of these endpoints can be called from up to 3 worldwide locations for monitoring purposes, the combined frequency of periodic HTTP calls to your web site should be sufficient to reduce the risk of your worker process being down at any point.

Dealing with recycling

If you run your worker code in Windows Azure Web Sites, you have no control over when your process is recycled. This should be no huge issue from the reliability standpoint, since your worker logic should be implemented to properly handle unexpected failures anyway (recycling is no different than any other unexpected event that causes your process to terminate).

In practice, however, worker logic is often optimized for certain assumptions around typical process lifetime. For example, you may run for 30 minutes before committing in-memory results to durable storage if you assume failures are infrequent and you otherwise control the process lifetime.

Given that you know your worker process is likely to be terminated by IIS more frequently, you should design around this assumption. Make your “transactions” smaller and commit often. This way, when a worker process is created anew after being recycled, it can pick up from where it left off without losing much work.
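The idea can be sketched as follows; `loadCheckpoint` and `saveCheckpoint` are hypothetical stand-ins for durable-storage calls, backed here by an in-memory object purely for illustration:

```javascript
// Commit small and often: progress is checkpointed after every item, so a
// recycled worker resumes where the previous one left off. The `durable`
// object is an in-memory stand-in for real durable storage.
var durable = { checkpoint: 0 };
function loadCheckpoint() { return durable.checkpoint; }
function saveCheckpoint(i) { durable.checkpoint = i; }

function processItems(items) {
    for (var i = loadCheckpoint(); i < items.length; i++) {
        // ... do the actual work for items[i] ...
        saveCheckpoint(i + 1); // commit after each item, not after 30 minutes
    }
}

processItems(['a', 'b', 'c']);
// if the process is recycled now, the next run starts past the completed work
```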

Dealing with unexpected worker termination

What should happen when your worker process unexpectedly terminates? Since it was spawned by the web tier process, that situation must be handled by the web tier code itself. You can either implement your own worker process lifetime policy within the web process code, or rely on the IIS policy for handling unexpected application process failures. Most of the time you probably don’t want to roll your own process lifetime management mechanism where one already exists. Instead, when the web process detects termination of the worker process it spawned, it should simply terminate itself and let IIS handle the situation. When the next HTTP request arrives, the web/worker process combo will be created anew.

Keep web and worker code separate

Once you grow out of the shoestring solution described here, you will need to separate your web and worker components into separate containers. To make this easy, it is best to minimize any interaction or shared state between the web and worker processes, even though they run on the same machine. Having durable storage be the only way for web and worker to exchange data makes it much easier to separate them when the time comes.

The only on-machine interaction between web and worker processes should be scoped to the web process spawning the worker process, and web process terminating itself upon unexpected worker process termination.

Limitations of scalability

The scalability mechanism of Windows Azure Web Sites really prevents reliable use of this shoestring mechanism on deployments involving more than 1 instance.

When your worker logic needs to be scaled out to handle the workload, you must be able to say “I need 5 instances of workers now” and have all of the 5 instances running concurrently. This is not how Windows Azure Web Site scalability works. When you say “I need 5 web instances now”, Azure really interprets it as “up to 5 instances”. The actual number of instances that will be running depends on the incoming HTTP traffic. So unless your worker scalability needs are always proportional to the number of incoming HTTP requests, you are likely to run into a situation where worker processes cannot keep up with outstanding work.

Workers on a shoestring, Mobile Chapters case study

I have successfully used the shoestring approach to run worker processes as part of the Mobile Chapters web application.

At its core, the web application accepts a book manuscript upload, stores the file in a durable store, and lets a worker process asynchronously convert the manuscript into mobile applications for iOS, Android, and Windows Phone. The overall conversion can take anywhere from seconds to minutes, depending on the complexity and size of the manuscript. The process is mostly IO bound, coordinating data flow and state transitions between PhoneGap Build, Azure Blob Storage, and MongoDB.

Another job the worker process performs is to periodically refresh data that the mobile applications can later fetch by calling out to external services. This is scheduled to happen every 15 minutes or so, and according to logs from loggly it works like clockwork. So the mechanism described here also lends itself well to the implementation of lightweight web schedulers.


The important part is that the shoestring approach provides me, as a developer, with a superior experience compared to what I would have to endure if I hosted the worker code in a Hosted Service, without compromising the functionality of the web application.


My name is Tomasz Janczuk. I am currently working on my own venture, Mobile Chapters. Formerly at Microsoft (12 years), focusing on Node.js, JavaScript, Windows Azure, and .NET Framework.