Thu, 31 Jul 2014 19:00:00 UTC - vulnerability

A memory corruption vulnerability, which results in a denial of service, was identified in the versions of V8 that ship with Node.js 0.8 and 0.10. In certain circumstances, a particularly deep recursive workload that triggers a GC and receives an interrupt can overflow the stack and result in a segmentation fault. For instance, if your workload involves successive JSON.parse calls and the parsed objects are significantly deep, the process may abort while parsing.

This issue was identified by Tom Steele of ^Lift Security. Fedor Indutny, a Node.js core team member, worked closely with the V8 team to find a resolution.

The V8 issue is described here

It has landed in the Node repository here:

And has been released in the following versions:

The Fix

The backport of the fix for Node.js is:

diff --git a/deps/v8/src/isolate.h b/deps/v8/src/isolate.h
index b90191d..2769ca7 100644
--- a/deps/v8/src/isolate.h
+++ b/deps/v8/src/isolate.h
@@ -1392,14 +1392,9 @@ class StackLimitCheck BASE_EMBEDDED {
   explicit StackLimitCheck(Isolate* isolate) : isolate_(isolate) { }

-  bool HasOverflowed() const {
+  inline bool HasOverflowed() const {
     StackGuard* stack_guard = isolate_->stack_guard();
-    // Stack has overflowed in C++ code only if stack pointer exceeds the C++
-    // stack guard and the limits are not set to interrupt values.
-    // TODO(214): Stack overflows are ignored if a interrupt is pending. This
-    // code should probably always use the initial C++ limit.
-    return (reinterpret_cast<uintptr_t>(this) < stack_guard->climit()) &&
-           stack_guard->IsStackOverflow();
+    return reinterpret_cast<uintptr_t>(this) < stack_guard->real_climit();
  }

  Isolate* isolate_;


The best course of action is to patch or upgrade Node.js.


To mitigate deep JSON parsing attacks, you can limit the size of the strings you parse, or ban clients that trigger a RangeError while parsing JSON.

There is no single recommended maximum size for a JSON string; cap it at the size of your largest known message body. If your message bodies cannot be over 20K, there's no reason to accept 1MB bodies.
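As a sketch of that policy (the helper name, the 20K cap, and the return shape here are our own illustration, not any Node API), a small wrapper can enforce the size limit before JSON.parse ever runs:

```javascript
// Cap the byte size of any string handed to JSON.parse, matching the
// largest message body we expect (20K here, an illustrative value).
var MAX_JSON_BYTES = 20 * 1024;

function safeJsonParse(text) {
  if (Buffer.byteLength(text, 'utf8') > MAX_JSON_BYTES) {
    return { ok: false, error: 'too large' };
  }
  try {
    return { ok: true, value: JSON.parse(text) };
  } catch (err) {
    // A RangeError here suggests a deliberately deep payload; the client
    // that sent it is a candidate for banning.
    return { ok: false, error: err.name };
  }
}

console.log(safeJsonParse('{"a":1}').ok);  // true
console.log(safeJsonParse('{oops').error); // SyntaxError
```

Clients whose bodies repeatedly exceed the cap, or that repeatedly produce a RangeError, are good candidates for rate limiting or banning at a higher layer.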

For web frameworks that do automatic JSON parsing, you may need to configure the routes that accept JSON payloads to have a maximum body size.
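As a concrete illustration, in an Express 4 application (assuming the body-parser middleware, the common choice at the time of writing) the limit option caps the JSON body per route; the route path and limit below are illustrative values:

```javascript
var express = require('express');
var bodyParser = require('body-parser');

var app = express();

// Only routes that actually accept JSON get a JSON parser, and each one
// caps the body at the largest message it legitimately needs.
app.post('/messages', bodyParser.json({ limit: '20kb' }), function (req, res) {
  res.json({ received: true });
});

// Oversized bodies are rejected with 413 Request Entity Too Large before
// JSON.parse ever runs.
app.listen(3000);
```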

Thu, 31 Jul 2014 18:39:10 UTC - release

2014.07.31, Version 0.8.28 (maintenance)

  • v8: Interrupts must not mask stack overflow. (Fedor Indutny)

Source Code:

Macintosh Installer (Universal):

Windows Installer:

Windows x64 Installer:

Windows x64 Files:

Linux 32-bit Binary:

Linux 64-bit Binary:

Solaris 32-bit Binary:

Solaris 64-bit Binary:

Other release files:




Hash: SHA1

3e6fcb94f48c911774632d33e98e2d635b136b24  node-v0.8.28-darwin-x64.tar.gz
1254edd0e7778555e2ae5861bc228ab4bf3397ac  node-v0.8.28-darwin-x86.tar.gz
a17fc55576af625ec12e366b30c4a44870a5f194  node-v0.8.28-linux-x64.tar.gz
835f784d38675a789ee269e08f266a2ab46aa09c  node-v0.8.28-linux-x86.tar.gz
39750b9b4d792e42b85dd0a620e781de8de23471  node-v0.8.28-sunos-x64.tar.gz
1d44e2e66219617ba8565c9a7ef05e999aaab34f  node-v0.8.28-sunos-x86.tar.gz
77f94aa76d204fa9e8e9b906dd045b157221a1f2  node-v0.8.28-x86.msi
ea2b94d75658914ddfe6a536ef27d1c016156e2d  node-v0.8.28.tar.gz
34d7b1561e32a207ed1de8089305d95773ee3762  node.exe
8fb6bb05c84b5621124e164877b32941ad7a441f  node.exp
e1cba9b0aafbd9185a84e612df002a95e58d5e68  node.lib
2f74410204ce93db1ee98ee4c8a560dfaa4a02cb  node.pdb
ae0f6c7296bd36c91cb8335c07c1f27d95fb056a  x64/node-v0.8.28-x64.msi
0d2a88f7e331b25d16b30e37d768ecce7aafc23a  x64/node.exe
374539be666e92b9b0756e9a9d199012dcc3da3e  x64/node.exp
70f0fa0d13730a5ce261a0153eb665a918544e1a  x64/node.lib
94000769cd6448b2523e71bb68628a7c10b0ea3c  x64/node.pdb


Thu, 31 Jul 2014 18:11:40 UTC - release

2014.07.31, Version 0.10.30 (Stable)

  • uv: Upgrade to v0.10.28

  • npm: Upgrade to v1.4.21

  • v8: Interrupts must not mask stack overflow.

  • Revert "stream: start old-mode read in a next tick" (Fedor Indutny)

  • buffer: fix sign overflow in readUInt32BE (Fedor Indutny)

  • buffer: improve {read,write}{U}Int* methods (Nick Apperson)

  • child_process: handle writeUtf8String error (Fedor Indutny)

  • deps: backport 4ed5fde4f from v8 upstream (Fedor Indutny)

  • deps: cherry-pick eca441b2 from OpenSSL (Fedor Indutny)

  • lib: remove and restructure calls to isNaN() (cjihrig)

  • module: eliminate double getenv() (Maciej Małecki)

  • stream2: flush extant data on read of ended stream (Chris Dickinson)

  • streams: remove unused require('assert') (Rod Vagg)

  • timers: backport f8193ab (Julien Gilli)

  • util.h: interface compatibility (Oguz Bastemur)

  • zlib: do not crash on write after close (Fedor Indutny)

Source Code:

Macintosh Installer (Universal):

Windows Installer:

Windows x64 Installer:

Windows x64 Files:

Linux 32-bit Binary:

Linux 64-bit Binary:

Solaris 32-bit Binary:

Solaris 64-bit Binary:

Other release files:




Hash: SHA1

4a16fc8768594cad5b4635e709afa035c2ffc0a1  node-v0.10.30-darwin-x64.tar.gz
92111c64e874c2bee24f35aa4bf8ba665d76e73e  node-v0.10.30-darwin-x86.tar.gz
35c3a2156e4ed7561a68efc70ee73069afe47174  node-v0.10.30-linux-x64.tar.gz
d7f222b3519df636be8e47e8ddb2c2ecb03f4060  node-v0.10.30-linux-x86.tar.gz
866541db248ced6b076e9fa13d6125159007a6a6  node-v0.10.30-sunos-x64.tar.gz
6abad0a47c67a5eec24ba3022108b53bcb00b376  node-v0.10.30-sunos-x86.tar.gz
0824d4d86ee38b58871344676162d797f4d74abb  node-v0.10.30-x86.msi
9f20513f167c0e8ebb7ea5e9f633272e72e3bec4  node-v0.10.30.pkg
bcef88d76c39147c79a28aa9e5d484564eb3ba7e  node-v0.10.30.tar.gz
50ad72fd5646d92ae9afcd39ffb084f6de925903  node.exe
22bd794611288027a6a1d995295f8f2ea092cb9e  node.exp
88cfd5e9d42d006df4c0709e3b10ec2d198578d9  node.lib
0f753fee3f82e98c232017a2977bb730bf73b42e  node.pdb
ea4c28e8c5f6eaa296be82aba8f52d5a90cd9633  openssl-cli.exe
abe93255f729922b55449f8c867ee9e82ae32cad  openssl-cli.pdb
4843e84a9170f503289df25029a32a1876106e7f  pkgsrc/nodejs-ia32-0.10.30.tgz
d283ef358257cc22ab421158d82906d388b024a8  pkgsrc/nodejs-x64-0.10.30.tgz
674491bd761a4c3e7485d2284e110ad8e7974bc0  x64/node-v0.10.30-x64.msi
b88ff4594e46a6e5403c84cd36805b8cf644f1df  x64/node.exe
a77dd6018caca01cdebfad41062ae62b4d9e73b9  x64/node.exp
46b4b56efa01d4feed4ea6a45b21e7e2fca6e5c8  x64/node.lib
d922b71c9a900b3e8ead4ae3c4ed262612c92085  x64/node.pdb
17678b0cba89ccec0478085257016b2b9c3f8c59  x64/openssl-cli.exe
428b5fa970ef89265fa738062af401b7f4f0216f  x64/openssl-cli.pdb


Tue, 29 Jul 2014 21:00:00 UTC - tjfontaine - Community

Node.js is reaching more people than ever; it's attracting new and interesting use cases while seeing heavy adoption from traditional engineering departments. Managing the project to make sure it continues to satisfy the needs of its end users requires a higher level of precision and diligence. It requires taking the time to communicate and reach out to new and old parties alike. It means seeking out new and dedicated resources. It means properly scoping a change in concert with end users, and documenting and regularly checkpointing your progress. These are just some of the ways we're working to improve our process and deliver higher quality software that meets our goals.


One of the big things we've wanted to do is to change the way the website works, which is something I've mentioned before. It should be a living, breathing website whose content is created by our end users and team. The website should be the canonical location for documentation on how to use Node.js, how Node.js works, and how to find out what's going on in the Node community. We have seeded the initial documentation with how to contribute, who the core team is, and some basic documentation of the project itself. From there we're looking to enable the community to come in and build out the rest of the framework for documentation.

One of the key changes here is that we're extending the tools that generate API documentation to work for the website in general. That means the website is now written in markdown. Contributions work via the same pull-request process as contributions to Node itself. The intent here is to be able to quickly generate new documentation and improve it with feedback from the community.

The website should also be where we host information about where the project is going and the features we're currently working on (more about that later). But it's crucial we communicate to our end users what improvements will be coming, and the reasons we've made those decisions. That way it's clear what is coming in what release, and also can inspire you to collaborate on the design of that API. This is not a replacement for our issue tracking, but an enhancement that can allow us to reach more people.


Which brings us to the conversation about features. During the Q & A portions of the Node.js on the Road events there are often questions about what does and doesn't go into core, how the team identifies those features, and when it decides to integrate them. I've spent a lot of time talking about that, and I've also added it to the new documentation on the site.

It's pretty straightforward: in short, if Node.js needs an interface to provide an abstraction, or if everyone in the community is using the same interface, then those interfaces are candidates for being exposed as public interfaces for Node. But the addition of an API should not be taken lightly. It is important for us to consider just how much of an interface we can commit to, because once we add an API it's incredibly hard for us to change or remove it, at least in a way that allows people to write software that will continue to work.

So new features and APIs need to come with known use cases and consumers, and with working test suites, and that information should be presented clearly and concisely on the website to reach as wide an audience as possible. Then, and only then, when we have an implementation that meets the design specification and satisfies the test suite, will we integrate it into the project. That's how we'll scope our releases going forward, and that's how we'll know when we're ready to release a new version of Node. This will be a great change for Node, as it's a step toward an always-production-ready master branch.

Quality Software

And it's because Node.js is focused on quality software and a commitment to backwards compatibility that it's important for us to seek ways to get more information from the community about when and where we might be breaking them. Having downstream users test their code bases with recent versions of Node.js (even from our master branch) is an important way we derive feedback for our changes. The sooner we can get that information, the more test coverage we can add, the better the software we deliver is.

Recently I had the opportunity to speak with Dav Glass from Yahoo!, and we're going to be finding ways to get automated test results back from some larger test suites. The more automation we can get for downstream integration testing the better the project can be at delivering quality software.

If you're interested in participating in the conversation about how Node.js can be proactively testing your software/modules when we've changed things, please join the conversation.

Current release

Before we can release v0.12, we need to ensure we're providing a high-quality release that addresses the needs of the users as well as what we've previously committed to including in this release. Sometimes a change that seems innocuous, and that solves an immediate symptom, doesn't actually treat the disease, and instead results in other symptoms that need to be treated. In our streams API specifically, it is easy to subtly break people while trying, with good intent, to fix another bug.

This serves as a reminder that we need to properly scope our releases. We need to know who the consumers are for new APIs and features. We need to make sure those features' test cases are met. We need to make sure we're adopting APIs that have broad appeal. And while we're able to work around some of these things through external modules and experimenting with JavaScript APIs, that's not a replacement for quality engineering.

Those are the things that we could have done better before embarking on 0.12, and now to release it we need to fix some of the underlying issues. Moving forward I'm working with consumers of the tracing APIs to work on getting a maintainable interface for Node that will satisfy their needs. We'll publicly document those things, we'll reach out to other stakeholders, and we'll make sure that as we implement that we can deliver discretely on what they need.

That's why it's important for us to get our releases right, and diagnose and fix root causes. We want to make sure that your first experience with 0.12 results in your software still working. This is why we're working with large production environments to get their feedback, and we're looking for those environments and you to file bugs that you find.

The Team

The great part about Node's contribution process and our fantastic community is that we have a lot of very enthusiastic members who want to work as much as possible on Node. Maybe they want to contribute because they have free time, maybe they want to contribute to make their job easier, or perhaps they want to contribute because their company wants them to spend their time on open source. Whatever the reason, we welcome contributions of every stripe!

We have our core team that manages the day to day of Node, and that works mostly by people wanting to maintain subsystems. They are not solely responsible for the entirety of that subsystem; rather, they guide its progress by communicating with end users, reviewing bugs and pull requests, and identifying test cases and consumers of new features. People come and go from the core team, and recently we've added some documentation that describes how you find your way onto that team. It's based largely around our contribution process. It's not about who you work for, or about who you know; it's about your ability to provide technical improvement to the project itself.

For instance, Chris Dickinson was recently hired to work full time on Node.js, and has expressed interest in working on the current and future state of streams. But it's not who employs Chris that makes him an ideal candidate; it's the quality of his contributions and his understanding of the ethos of Node.js. That's how we find members of the team. And Chris gets that; in his blog post about working full time on Node.js he says (and I couldn't have said it better myself):

I will not automatically get commit access to these repositories — like any other community member, I will have to continually submit work of consistent quality and put in the time to earn the commit bit. The existing core team will have final say on whether or not I get the commit bit — which is as it should be!

Exactly. And not only does he understand how the mechanism works, but he's already started getting feedback from consumers of streams and documenting some of his plans.

In addition to Chris being hired to work full time on Node.js, Joyent has recently hired Julien Gilli to work full time with me on Node. I'm really excited for all of the team to be seeking out new contributors, and getting to know Chris and Julien. They're both fantastic and highly motivated, and I want to do my best to enable them to be successful and join the team. But that's not all, I've been talking to other companies who are excited to participate in this model, and in fact themselves are looking to find someone this year to work full time on Node.js.

Node.js is bigger than the core team, it's bigger than our community, and we are excited to continue to get new contributors, and to enable everyone. So while we're working on the project we can't just focus on one area, but instead consider the connected system as a whole. How we scale Node, how we scale the team, how we scale your contributions, and how we integrate your feedback -- this is what we have to consider while taking this project forward, together.
