I receive data from a server and need to categorize it into buckets for presentation.
The dataset appears in the following format:
[
  { "name": "1.00", "value": 17 },
  { "name": "1.01", "value": 10 },
  { "name": "1.5", "value": 9 },
  { "name": "1.20", "value": 8 },
  { "name": "1.30", "value": 7 },
  { "name": "0.80", "value": 5 }
]
In this context, the name represents the "size" and the value signifies the number of occurrences within the system. For example, there are 5 records with size "0.80" and 8 records with size "1.20".
The objective is to group the data into ranges as follows:
[
{key: 'Under .30', value: 11},
{key: '.30 to .39', value: 3},
{key: '.40 to .49', value: 2},
...
...
{key: '.90 to .99', value: 1},
{key: '1.00 to 1.09', value: 3},
{key: '1.10 to 1.19', value: 2},
...
...
{key: '5.00 to 5.09', value: 5},
{key: '5.10 to 5.19', value: 1},
...
{key: 'Over 10', value: 3},
{key: 'Other', value: 21}
]
Here, the key denotes the size range, while the value represents the total occurrences within that grouping.
Essentially, I aim to (a rough sketch follows this list):
- Convert each row's name into a float
- Determine whether a range already exists for that float
- If not, create the range and seed it with the row's value
- If it does exist, add the row's value to its running total
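Here is a minimal sketch of that loop in JavaScript. The 0.10 bucket width, the 'Under .30' / 'Over 10' cutoffs, the label format, and the assumption that names have at most two decimals are all guesses based on the sample output above:

function groupBySize(rows) {
  // Format a bound like the sample keys ('.30', '1.00'):
  // two decimals, no leading zero below 1.
  const fmt = (n) => n.toFixed(2).replace(/^0\./, '.');
  const buckets = new Map();

  for (const row of rows) {
    const size = parseFloat(row.name); // step 1: name -> float

    let key;
    if (Number.isNaN(size)) {
      key = 'Other';                   // anything unparseable
    } else if (size < 0.3) {
      key = 'Under .30';
    } else if (size > 10) {
      key = 'Over 10';
    } else {
      // Derive the label arithmetically instead of ~100 if statements.
      // Round to whole hundredths first so float noise (e.g. size * 10
      // not being exactly an integer) cannot shift a value into the
      // wrong bucket.
      const lower = Math.floor(Math.round(size * 100) / 10) / 10;
      key = fmt(lower) + ' to ' + fmt(lower + 0.09); // e.g. '1.20 to 1.29'
    }

    // steps 2-4: create the bucket if missing, otherwise add to it
    buckets.set(key, (buckets.get(key) || 0) + row.value);
  }

  return [...buckets.entries()].map(([key, value]) => ({ key, value }));
}

// With the sample data above, groupBySize(data) yields
// [{ key: '1.00 to 1.09', value: 27 }, { key: '1.50 to 1.59', value: 9 }, ...]

A Map keeps insertion order, so the result would still need sorting by lower bound before display, and the boundary handling (e.g. whether exactly 10.00 counts as 'Over 10') is a guess.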
I am currently considering pre-creating all of the ranges statically, from 0.30 up to 10+, and then checking each row against them with a chain of if statements, which would run to well over 100 cases.
Any assistance or alternative approaches to tackling this issue would be immensely appreciated!