Solved: javascript no-param-reassign with props, again


A comment regarding this change:

While I agree in general that function parameters should not be mutated, this change gave me a lot of errors in code that I consider safe. For example:

['foo', 'bar', 'qux', 'foo'].reduce((accumulator, value) => {
  accumulator[`_${value}`] = true;
  return accumulator;
}, {});

I know, I can change this rule in my .eslintrc or disable it for this particular line of code. But this reduce pattern is quite common in JavaScript, and Lodash also has a bunch of functions that work this way. There’s little risk in mutating the accumulator, since the reduce operation as a whole is non-mutating.
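For the record, the per-line escape hatch mentioned above would look roughly like this (a sketch assuming ESLint's standard disable comments):

```javascript
// Silence the rule just for the mutating line inside the reducer.
const flags = ['foo', 'bar', 'qux', 'foo'].reduce((accumulator, value) => {
  // eslint-disable-next-line no-param-reassign
  accumulator[`_${value}`] = true;
  return accumulator;
}, {});
// flags is { _foo: true, _bar: true, _qux: true }
```

Alternatively, `/* eslint-disable no-param-reassign */` at the top of the file turns the rule off for the whole module.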

While the { "props": true } rule might catch some errors, it mostly produces false positives in my codebases. I’m already writing in a non-mutating, mostly pure functional style.

Relatedly, the example in the style guide isn’t helpful IMHO. The “good” and “bad” examples do not seem to match: the “bad” example mutates an object, while the “good” example illustrates input normalization, which is a different thing IMHO. ;)

Perhaps there are cases where people accidentally mutate parameters just to normalize input. Then a better example would be:

// bad
function f1(obj) {
  obj.key = obj.key || 1; // Mutate to normalize
  // Use obj.key
}

// good
function f2(obj) {
  const key = obj.key || 1; // Normalize without mutating
  // Use key
}

But I think people mutate parameters not just accidentally, but more commonly as a deliberate way to change objects. For example:

function doSomethingWithObject(obj) {
  obj.hello = 'world';
}
Then, a good alternative would be a pure function. It could clone the input, then augment it. Or use a library for immutable data that has smart lists and hashes, like Immutable.js.

Assuming this case, a better example would be:

// bad
function f1(obj) {
  obj.key = 1;
}

// good
function f2(obj) {
  return Object.assign({}, obj, { key: 1 }); // Shallow clone
}

Or with prototypal delegation:

function f2(obj) {
  const obj2 = Object.create(obj);
  obj2.key = 1;
  return obj2;
}

(Or with a library like Immutable.js…)
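One caveat worth spelling out for both alternatives above: the Object.assign clone is shallow, and the Object.create delegate shares everything with its prototype, so nested objects remain shared with the input. A quick sketch:

```javascript
const original = { key: 0, nested: { x: 1 } };

// Shallow clone: top-level keys are copied, nested objects are shared.
const copy = Object.assign({}, original, { key: 1 });
copy.nested.x = 2;
// original.nested.x is now 2 as well — the mutation leaked through
// the shared reference, even though original.key stayed 0.
```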

I think the bottom line is that the topic of immutability is too complex to be covered in a simple example. And the rule no-param-reassign: [2, { "props": true }] cannot cover the nuances here. I propose to reconsider the switch to { "props": true }.

39 Answers

✔️Accepted Answer

This is a rule we use at Airbnb - in the reducer case, if the body of your reducer is return Object.assign({}, accumulator, { [key]: value }) then you'll avoid mutation as well as satisfy the linter (both in spirit and in fact).
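Applied to the reducer from the question, that suggestion would read roughly:

```javascript
// Non-mutating reducer: each iteration returns a fresh object
// instead of assigning onto the accumulator parameter.
const flags = ['foo', 'bar', 'qux', 'foo'].reduce(
  (accumulator, value) => Object.assign({}, accumulator, { [`_${value}`]: true }),
  {}
);
// flags is { _foo: true, _bar: true, _qux: true }
```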

Mutation should be avoided in all its forms, and our style guide tries to be consistent with this belief.

Other Answers:

The performance of the return Object.assign({}, ...) solution is O(n^2); the OP’s solution is O(n). There must be a REALLY good reason to choose a quadratic implementation over a linear one. What do we get for such a sacrifice?
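To make the complexity claim concrete, here is a small sketch (the helper name is illustrative) that counts how many property copies the clone-per-iteration approach performs — roughly n(n-1)/2, i.e. O(n^2), versus n single writes for the mutating version:

```javascript
// Counts property copies done by a clone-per-iteration reduce.
// The inner loop mirrors what Object.assign({}, accumulator, ...) must do.
function countCloneCopies(n) {
  const keys = Array.from({ length: n }, (_, i) => `k${i}`);
  let copies = 0;
  keys.reduce((accumulator, key) => {
    const next = {};
    for (const k of Object.keys(accumulator)) {
      next[k] = accumulator[k]; // every existing key is copied again
      copies += 1;
    }
    next[key] = true;
    return next;
  }, {});
  return copies;
}

// countCloneCopies(10) === 45, i.e. 10 * 9 / 2 — quadratic growth.
```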

If the answer is "to make the linter happy" then that's the tail wagging the dog. This is a corner case that eslint does not (yet) cover (which is OK, we evolve it as things come up). For now we can /* eslint-disable no-param-reassign */ until eslint handles this properly (I would argue it's a JS idiom).

If the answer is "to make the code more functional", then I ask you: where do we draw the line? JavaScript is not functional. It is imperative and object-based. We can write pure functions and pass them around because functions are first-class citizens. And that's really great when the situation is appropriate. But to say "all functions should always be pure" is to choose to ignore the realities of the language in favour of some ideal which really isn't applicable here.

What do you think happens under the covers when you call Object.assign()? The JavaScript interpreter/compiler can't easily make "functional" assumptions about your code, so it has to fall back to a naive implementation:

// Pseudo-code
function assign(target, ...other) {
  foreach (map in other) {
    foreach (pair in map) {
      target[pair.key] = pair.value;
    }
  }
  return target;
}

Interpreters in true functional languages like Haskell, Lisp, and Scala can make assumptions, though, because of the mathematical properties of those languages. You can have your cake and eat it, too. You can write naive declarative code without incurring the cost of a naive imperative implementation. The interpreter can safely make assumptions about what you mean and fudge the true implementation on the metal.

Now we have to ask ourselves "what is the purpose of this linting rule?" To prevent people making this mistake:

function sayHelloTo(person) {
  // Don't waste memory lol! Just store the capitalised name here!
  // With an exclamation point! WCGW?
  person.name = person.name.toUpperCase() + '!';
  console.log('Well hello ' + person.name);
}

let person = { name: 'Mr. Bean' };

sayHelloTo(person); // Well hello MR. BEAN!
savePersonToDatabase(person); // Saved as { name: 'MR. BEAN!' }

The person implementing sayHelloTo(person) mutates an object she/he knows nothing about! That's dangerous! I want my linter to warn me and my team about issues like this so we can catch them sooner.
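For contrast, a non-mutating version of that function, which the rule would happily accept, is straightforward:

```javascript
// Derives a local value instead of writing back onto the argument.
function sayHelloTo(person) {
  const shouted = person.name.toUpperCase() + '!';
  console.log('Well hello ' + shouted);
}

const person = { name: 'Mr. Bean' };
sayHelloTo(person); // Well hello MR. BEAN!
// person.name is still 'Mr. Bean' — safe to save to the database afterwards.
```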

Compare to this:

function truthify(array) {
  return array.reduce((prev, cur) => {
    prev[`_${cur}`] = true;
    return prev;
  }, {});
}

truthify() is a pure function which returns an object based on the provided array. To achieve this I create an empty object, immediately mutate it to build it up, and then return it. I don't mutate array, and my new object cannot be mutated until after I'm done with it! So then why should I write my code as though that is a possibility? And again incur the cost of an O(n^2) operation? I just don't see the benefit here.
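That purity claim is easy to check from the outside: the input array is untouched, and each call returns a fresh object (restating truthify here so the sketch is self-contained):

```javascript
function truthify(array) {
  return array.reduce((prev, cur) => {
    prev[`_${cur}`] = true;
    return prev;
  }, {});
}

const input = ['foo', 'bar'];
const first = truthify(input);
const second = truthify(input);
// input is still ['foo', 'bar'], and first !== second even though both
// are { _foo: true, _bar: true } — the internal mutation never escapes.
```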

To wrap up, just because you can write pure functions in JavaScript doesn't mean JavaScript is a functional language. Don't pretend the interpreter is written to turn naive declarative code into a sensible imperative implementation. Do functional when it makes sense and use it to your advantage. But don't follow an ideal that conflicts with the problem at hand and a practical solution just because the linter complained about it.

You're right, my mistake. I've updated my original comment.

Performance is the last and least important concern when programming - it's easy to speed up clean code, but it's hard to clean up fast code.

If you are using the object spread syntax:

const obj = ['key1', 'key2'].reduce((accumulator, key) => ({
  ...accumulator,
  [key]: "value",
}), {});

// obj is:
// {
//   key1: "value",
//   key2: "value",
// }

A note about Object.assign: it doesn't avoid mutation. It mutates the given target object, which in your example is accumulator.

See an example in Chrome console:

> var obj = {};
> var newObj = Object.assign(obj, {a: 1});
> obj
    Object {a: 1}
> newObj
    Object {a: 1}

To avoid mutation, it should be written as Object.assign({}, accumulator, { [key]: value }). Beware: this copies the accumulator on each iteration. In my opinion there's no clear benefit to doing this, and it makes performance worse.
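For contrast, the same console session with an empty object as the first argument leaves obj alone:

```javascript
var obj = {};
var newObj = Object.assign({}, obj, { a: 1 });
// obj is still {} — only newObj picked up the property.
// newObj is { a: 1 }
```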
