If you're looking to filter out values from one array that already appear in another, one way to do it is with the filter method like so:
const newArr = random.filter(name => !names.includes(name));
This approach works well for a small number of values, but may not be efficient for larger datasets.
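For reference, here is that filter line as a complete, runnable snippet, using the same sample arrays as the Map example further down (the expected output in the comment is just the values of random that don't appear in names):

const names = ["Daniel", "Lucas", "Gwen", "Henry", "Jasper"];
const random = ["hi", "Lucas", "apple", "banana", "Jasper", "hello", "meow"];

// Keep only the values in random that do not appear in names.
const newArr = random.filter(name => !names.includes(name));

console.log(newArr); // ["hi", "apple", "banana", "hello", "meow"]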
The time complexity of this approach is O(n²) when both lists are roughly the same size n: includes does a linear scan, so each element of random is compared against every element of names.
For larger amounts of data, consider using a Map for the lookups instead.
const names = ["Daniel", "Lucas", "Gwen", "Henry", "Jasper"];
const random = ["hi", "Lucas", "apple", "banana", "Jasper", "hello", "meow"];
// Build a Map of the names for O(1) membership checks.
const namesMap = new Map();
for (const name of names) namesMap.set(name, true);

// Keep only the values in random that aren't in namesMap.
const newArr = [];
for (const rand of random) {
  if (namesMap.has(rand)) continue;
  newArr.push(rand);
}
Despite the additional loop, the overall time complexity stays linear: building the Map is O(m) and scanning random is O(n) with O(1) lookups, so the total is O(n + m), or simply O(n) when the lists are about the same size.
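If you need this in more than one place, here's one way you could package the Map-based version as a reusable helper. This is just a sketch; the function name removeMatches is purely illustrative, not part of the original snippet:

// Filters out any value from `source` that also appears in `exclude`,
// using a Map so each membership check is O(1).
function removeMatches(source, exclude) {
  const excludeMap = new Map();
  for (const value of exclude) excludeMap.set(value, true);

  const result = [];
  for (const item of source) {
    if (excludeMap.has(item)) continue;
    result.push(item);
  }
  return result;
}

console.log(removeMatches(random, names)); // ["hi", "apple", "banana", "hello", "meow"]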
Keep in mind that while this method is more efficient for larger datasets, it might sacrifice some readability.