Pushing onto a reduce accumulator is usually the faster option, and the gap widens as the array grows. That said, trading readability for a small performance gain is rarely worth it unless you are dealing with large datasets.
Between the two methods in question, I personally prefer reduce, because it makes explicit how the data is processed: it walks the array once, folding each element into an accumulator and returning the final result.
var options = [
{ name: 'One', assigned: true },
{ name: 'Two', assigned: false },
{ name: 'Three', assigned: true },
];
console.time();
var assignees = options.reduce((a, o) => (o.assigned && a.push(o.name), a), []);
console.timeEnd();
console.log(assignees);
A couple of points about the above code:
- The && operator short-circuits: if o.assigned is falsy, the expression evaluates to o.assigned and nothing happens; if it is truthy, the push is executed.
- The comma operator evaluates the push, discards its return value, and returns the mutated accumulator a. The wrapping parentheses are required so the comma is parsed as the comma operator inside the arrow function body, not as an argument separator; see the explicit form below.
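For anyone who finds those operators opaque, the same callback can be written with an explicit body; the two forms are equivalent:
var assignees = options.reduce((a, o) => {
  if (o.assigned) a.push(o.name); // collect only the names of assigned options
  return a;                       // always hand the accumulator back to reduce
}, []);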
Although this code is compact, it can be hard to read for anyone unfamiliar with these operators. A mutation-free alternative is o.assigned ? a.concat([o.name]) : a, but allocating a fresh array on every iteration costs considerably more time and memory as the input grows. The spread variant [...a, o.name] suffers from the same problem and is best avoided inside reduce.
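For completeness, a sketch of that non-mutating variant in full; it yields the same result, just with an extra array allocation per assigned item:
var assigneesImmutable = options.reduce(
  (a, o) => o.assigned ? a.concat([o.name]) : a, // concat returns a new array each time
  []
);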
As for flatMap: removing the empty slots of sparse arrays is more a side effect of flat and flatMap than their purpose.
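A one-line demonstration of that side effect:
console.log([1, , 3].flat()); // [1, 3] (the hole is dropped)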
Without multi-dimensional arrays the method may seem unnecessary, common as it is. A suitable use case would look something like the following:
var options = [
{ names: ['One', 'Two', 'Three'], assigned: true },
{ names: ['Four', 'Five', 'Six'], assigned: false },
{ names: ['Seven', 'Eight', 'Nine'], assigned: true },
];
console.time();
var assignees = options.flatMap(o => o.assigned ? o.names : []);
console.timeEnd();
console.log(assignees);
To get the same result with reduce, you'd need either the spread operator, as in a.push(...o.names), or an inner loop such as o.names.forEach(name => a.push(name)). Both are clearly less elegant than flatMap in this scenario. Chaining .flat() after .filter(...).map(...) works as well, but lacks the efficiency and elegance of flatMap; see the reduce sketch below for comparison.
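For comparison, a sketch of the reduce equivalent of the flatMap example above:
var assignees = options.reduce((a, o) => {
  if (o.assigned) a.push(...o.names); // spread the nested names into the accumulator
  return a;
}, []);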
For situations where the filtered list will probably be reused several times, filtering the options first and then performing the necessary actions may be preferable, as demonstrated by @Bergi.
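A minimal sketch of that reuse pattern (the second use, a count, is purely illustrative):
var assigned = options.filter(o => o.assigned); // filter once
var assignees = assigned.map(o => o.name);      // first use: collect the names
var assignedCount = assigned.length;            // second use: count without re-filtering
Simpler needs can be covered by a custom function built around a for loop and named to fit your conventions. Each method has been benchmarked below: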
var options = Array.from({length:1000000}, (v, i) => ({
name: String(i),
assigned: Math.random() < 0.5
}));
function getAssignees(list) {
var assignees = [];
for (var o of list) if (o.assigned) assignees.push(o.name);
return assignees;
}
console.time('a'); // a: reduce with push
var assigneesA = options.reduce((a, o) => (o.assigned && a.push(o.name), a), []);
console.timeEnd('a');
console.time('b'); // b: flatMap
var assigneesB = options.flatMap(o => o.assigned ? [o.name] : []);
console.timeEnd('b');
console.time('c'); // c: custom for...of loop
var assigneesC = getAssignees(options);
console.timeEnd('c');
console.time('d'); // d: filter + map
var assigneesD = options.filter(o => o.assigned).map(o => o.name);
console.timeEnd('d');
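As a quick sanity check, separate from the timing, you can confirm that all four approaches produce identical results:
console.log(
  [assigneesB, assigneesC, assigneesD].every(
    (list) => JSON.stringify(list) === JSON.stringify(assigneesA)
  )
); // true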
After running the benchmarks repeatedly, a few observations stand out:
- Depending on the operating system and browser, reduce is usually the fastest, occasionally beaten by the plain loop; filter + map trails closely, and flatMap consistently falls behind.
- The differences only become noticeable at around a million records, and even then they amount to a few milliseconds, which is insignificant for practical purposes.
Rather than fixating on marginal speed improvements, direct your attention to overall code structure, how often the code actually runs, and above all conceptual clarity. The filter + map approach wins on cleanliness and readability, but learning the intricacies of reduce will pay off for complex transformations and performance-critical data operations down the road.
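As a closing, hypothetical illustration of the kind of transformation where reduce earns its keep, here is a single pass that groups the small options list from the first example by assignment status (the result shape is an assumption for illustration):
var groups = options.reduce((acc, o) => {
  var key = o.assigned ? 'assigned' : 'unassigned';
  (acc[key] = acc[key] || []).push(o.name); // create the bucket on first use
  return acc;
}, {});
console.log(groups); // { assigned: ['One', 'Three'], unassigned: ['Two'] }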