Today we are releasing new versions of Node:
First and foremost, these releases address the current OpenSSL vulnerability, CVE-2014-0224. For both 0.8 and 0.10 we've upgraded the bundled OpenSSL to the fixed versions, 1.0.0m and 1.0.1h respectively.
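If you are unsure which OpenSSL your Node binary bundles, you can check from Node itself via the standard `process.versions` object:

```javascript
// Print the version of the OpenSSL library bundled with this Node binary.
// After upgrading, this should report 1.0.0m (on 0.8) or 1.0.1h (on 0.10).
console.log(process.versions.openssl);
```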
Additionally, these releases address the fact that V8's UTF-8 encoding would allow
unmatched surrogate pairs. That is to say, previously you could construct a
Buffer as UTF-8, send that string to another process, and the consumer would
fail to interpret it because the string was invalid UTF-8.
Note: the results encoded by V8 in this case are exactly what was passed into the encoding routine. There is no overflow, underflow, or inclusion of other arbitrary memory; merely an unmatched surrogate resulting in invalid UTF-8.
As of these releases, if you try to pass a string with an unmatched surrogate
pair, Node will replace that character with the Unicode replacement character
(U+FFFD). To preserve the old behavior, set the environment variable
NODE_INVALID_UTF8 to anything (even nothing). If the environment variable is
present at all, Node will revert to the old behavior.
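One way to observe the new behavior from application code is to round-trip a string through UTF-8 encoding: if an unmatched surrogate was replaced with U+FFFD, the round trip will not reproduce the original string. A minimal sketch, using the modern `Buffer.from` API (the `isWellFormedUtf8` helper name is ours):

```javascript
// Returns true if str survives a round trip through UTF-8 unchanged,
// i.e. it contains no unmatched surrogates for Node to replace.
function isWellFormedUtf8(str) {
  return Buffer.from(str, 'utf8').toString('utf8') === str;
}

console.log(isWellFormedUtf8('abcd'));       // true
console.log(isWellFormedUtf8('ab\ud800cd')); // false: \ud800 became U+FFFD
```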
This breaks backward compatibility for a specific reason: unsanitized
strings sent as a text payload to an RFC-compliant WebSocket implementation
should result in the disconnection of the client. If the client attempts to
reconnect and sends another invalid payload, it must be disconnected again. If
there is no logic to handle the reconnection attempts, this may lead to a
denial-of-service attack; for instance, socket.io attempts to reconnect
automatically, so a client that keeps sending the same invalid payload will be
disconnected and reconnect in a loop.
// Prior to these releases:
new Buffer('ab\ud800cd', 'utf8');
// <Buffer 61 62 ed a0 80 63 64>

// After this release:
new Buffer('ab\ud800cd', 'utf8');
// <Buffer 61 62 ef bf bd 63 64>

// This is an explicit conversion to a Buffer, but the implicit
// .write('ab\ud800cd') also results in the same pattern.

websocket.write(new Buffer('ab\ud800cd', 'utf8'));
// This would result in the client disconnecting.
Node's default encoding for strings is UTF-8, so even if you're not explicitly
making Buffers out of strings, Node may be doing so under the hood. If what
you're passing is not actually UTF-8, then when you call .write(str) you can be
specific and say .write(str, 'binary'), which signals Node to pass the string
through without interpreting it.
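The difference matters for any string containing char codes above 0x7F. A small sketch contrasting the two encodings ('binary' is an alias for 'latin1'; `Buffer.from` is the modern spelling of the constructor shown elsewhere in this post):

```javascript
// 'utf8' interprets the string as Unicode text; 'binary' copies the low
// byte of each char code straight through without interpretation.
const s = 'ab\u00ffcd';
console.log(Buffer.from(s, 'utf8'));   // <Buffer 61 62 c3 bf 63 64>
console.log(Buffer.from(s, 'binary')); // <Buffer 61 62 ff 63 64>
```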
Thanks to Node.js alum Felix Geisendörfer for finding the issue, getting the fixes upstreamed, and helping with the testing and mitigation, as well as for helping to inform and improve the process for handling Node.js security issues.
To float these fixes in your own builds you can apply the following patch with
Node.js is vulnerable to a denial of service attack when a client sends many pipelined HTTP requests on a single connection, and the client does not read the responses from the connection.
We recommend that anyone using Node.js v0.8 or v0.10 to run HTTP servers in production please update as soon as possible.
- v0.10.21 http://blog.nodejs.org/2013/10/18/node-v0-10-21-stable/
- v0.8.26 http://blog.nodejs.org/2013/10/18/node-v0-8-26-maintenance/
This is fixed in Node.js by pausing both the socket and the HTTP parser whenever the downstream writable side of the socket is awaiting a drain event. In the attack scenario, the socket will eventually time out, and be destroyed by the server. If the "attacker" is not malicious, but merely sends a lot of requests and reacts to them slowly, then the throughput on that connection will be reduced to what the client can handle.
There is no change to program semantics, and except in the pathological cases described, no changes to behavior.
If upgrading is not possible, then putting an HTTP proxy in front of the Node.js server can mitigate the vulnerability, but only if the proxy parses HTTP and is not itself vulnerable to a pipeline flood DoS.
For example, nginx will prevent the attack (since it closes connections after 100 pipelined requests by default), but HAProxy in raw TCP mode will not (since it proxies the TCP connection without regard for HTTP semantics).
This addresses CVE-2013-4450.
A carefully crafted attack request can cause the contents of the HTTP parser's buffer to be appended to the attacking request's header, making it appear to come from the attacker. Since it is generally safe to echo back contents of a request, this can allow an attacker to get an otherwise correctly designed server to divulge information about other requests. It is theoretically possible that it could enable header-spoofing attacks, though such an attack has not been demonstrated.
- Versions affected: All versions of the 0.5/0.6 branch prior to 0.6.17, and all versions of the 0.7 branch prior to 0.7.8. Versions in the 0.4 branch are not affected.
- Fix: Upgrade to v0.6.17, or apply the fix in c9a231d to your system.
A few weeks ago, Matthew Daley found a security vulnerability in Node's HTTP implementation, and thankfully did the responsible thing and reported it to us via email. He explained it quite well, so I'll quote him here:
There is a vulnerability in node's http_parser binding which allows information disclosure to a remote attacker:
In node::StringPtr::Update, an attempt is made at an optimization on certain inputs (node_http_parser.cc, line 151). The intent is that if the current string pointer plus the current string size is equal to the incoming string pointer, the current string size is just increased to match, as the incoming string lies just beyond the current string pointer. However, the check to see whether or not this can be done is incorrect; "size" is used whereas "size_" should be used. Therefore, an attacker can call Update with a string of certain length and cause the current string to have other data appended to it. In the case of HTTP being parsed out of incoming socket data, this can be incoming data from other sockets.
Normally node::StringPtr::Save, which is called after each execution of http_parser, would stop this from being exploitable as it converts strings to non-optimizable heap-based strings. However, this is not done to 0-length strings. An attacker can therefore exploit the mistake by making Update set a 0-length string, and then Update past its boundary, so long as it is done in one http_parser execution. This can be done with an HTTP header with empty value, followed by a continuation with a value of certain length.
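The trigger described in that last sentence can be pictured as a raw request string. The header name and continuation length below are made up for illustration; the report does not specify them:

```javascript
// An empty header value followed by a continuation line. On a vulnerable
// parser, the 0-length value string could then be "extended" past its
// boundary into unrelated buffer contents.
var request = [
  'GET / HTTP/1.1',
  'Host: example.com',
  'X-Empty:',                      // header with an empty value
  ' ' + new Array(101).join('A'),  // continuation of a certain length
  '',
  ''
].join('\r\n');

console.log(request);
```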
The attached files demonstrate the issue:
$ ./node ~/stringptr-update-poc-server.js &
11801
$ ~/stringptr-update-poc-client.py
HTTP/1.1 200 OK
Content-Type: text/plain
Date: Wed, 18 Apr 2012 00:05:11 GMT
Connection: close
Transfer-Encoding: chunked

64
X header: This is private data, perhaps an HTTP request with a Cookie in it.
0
The fix landed in 7b3fb22 and c9a231d, for master and v0.6 respectively. The innocuous commit message does not give away the security implications, precisely because we wanted to get a fix out before making a big deal about it.
The first releases with the fix are v0.7.8 and v0.6.17. So now is a good time to make a big deal about it.
If you are using Node v0.6 in production, please upgrade to at least v0.6.17, or apply the fix in c9a231d to your system. (Version 0.6.17 also fixes some other important bugs and is without doubt the most stable release of Node 0.6 to date, so it's a good idea to upgrade anyway.)
I'm extremely grateful that Matthew took the time to report the problem to us with such an elegant explanation, and in such a way that we had a reasonable amount of time to fix the issue before making it public.