Here are two solutions, both built on two small helper functions. The sum function totals an array of numbers. The groupBy function groups elements by a key function: it returns an object whose keys are the results of applying that function to each element and whose values are arrays of the elements that produced each key. For example:
groupBy(x => x % 10)([21, 15, 11, 3, 5, 1, 7])
//=> {"1": [21, 11, 1], "3": [3], "5": [15, 5], "7": [7]}
We use this grouping to collect the prices for each area. In both solutions, we first group the data this way and then flatten the resulting object into an array of area/priceList pairs, like so:
[[1200, [12000, 14000, 13000]], [1300, [24000, 22000]], [2000, [30000]]]
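As an illustration, assuming the input records look something like {area, price} (the field names here are my guess, not necessarily those of the original data), the grouping and flattening could be done along these lines:

const data = [
  {area: 1200, price: 12000}, {area: 1200, price: 14000}, {area: 1200, price: 13000},
  {area: 1300, price: 24000}, {area: 1300, price: 22000},
  {area: 2000, price: 30000},
]

// group records by area, then keep only each group's list of prices
const pairs = Object.entries(groupBy((r) => r.area)(data))
  .map(([area, recs]) => [Number(area), recs.map((r) => r.price)])
//=> [[1200, [12000, 14000, 13000]], [1300, [24000, 22000]], [2000, [30000]]]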
The first solution, and my preferred one, includes the current item when calculating outliers. That seems more intuitive to me (why should every pair contain two outliers?) and it turns out to be both simpler and more efficient. The implementation looks like this:
// Code snippet provided
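Since the snippet itself isn't reproduced above, here is a rough sketch of the shape that first solution could take. It assumes the same {area, price} records as before and a simple outlier rule of my own choosing (a price counts as an outlier if it differs from its group's mean by more than a given fraction of that mean); the actual rule, threshold, and function name come from the original question and may differ:

const findOutliers = (threshold) => (records) =>
  Object.entries(groupBy((r) => r.area)(records))
    .flatMap(([, recs]) => {
      // mean over the whole group, current item included
      const mean = sum(recs.map((r) => r.price)) / recs.length
      return recs.filter((r) => Math.abs(r.price - mean) > threshold * mean)
    })

// e.g. findOutliers(0.25)(data)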
Alternatively, if we want to exclude the current item when determining whether it is an outlier, we can take a different approach:
// Another code snippet provided
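Under the same assumptions, a sketch of this second variant, where each price is compared against the mean of the other prices in its group and the group total is precomputed rather than re-summed for every item:

const findOutliersExcludingSelf = (threshold) => (records) =>
  Object.entries(groupBy((r) => r.area)(records))
    .flatMap(([, recs]) => {
      if (recs.length < 2) return []               // no "others" to compare against
      const total = sum(recs.map((r) => r.price))  // computed once per group
      return recs.filter((r) => {
        const meanOfOthers = (total - r.price) / (recs.length - 1)
        return Math.abs(r.price - meanOfOthers) > threshold * meanOfOthers
      })
    })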
In this second solution, I have tried to optimize the mean calculation by precomputing the group total and subtracting the current value. For small datasets, though, this may not buy much; simply computing the full mean inside the inner loop could be just as fast.
I haven't run benchmarks to show that these solutions outperform your existing approach. Still, I expect the first solution, like Bergi's similar one, to be significantly faster, because the initial grouping step avoids redundant calculations per group.
The second solution may not offer much of an advantage, since it repeats computations for every item.
One performance point worth mentioning is the difference between:
sum(xs) / xs.length
and
sum(xs.map(x => x / xs.length))
The first performs a single division at the end; the second divides on every element. Doing the division once, after accumulating the sum, keeps the number of division operations to a minimum.
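To make that concrete, here is a trivial illustration; both functions compute the same mean (up to floating-point rounding), but the first performs one division in total while the second performs one per element:

const mean1 = (xs) => sum(xs) / xs.length                // divide once at the end
const mean2 = (xs) => sum(xs.map((x) => x / xs.length))  // divide for every element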