The purpose of this code is to generate a list of arrays that groups words by their length; each word entry includes an array of subwords, i.e. words that can be spelled using only the characters of the word itself. I am trying to optimize the code to handle 81,000 words more efficiently. The program currently works but takes several hours to complete, and I am looking for ways to cut the processing time.
Below is a sample from the words.txt file:
do
dog
eat
go
god
goo
good
tea
The expected output, in subwords.json format, for the above text file is shown below. The top-level array is indexed by word length, and each word's subwords array is likewise indexed by subword length:
[
  [],
  [],
  [
    {
      "word": "do",
      "subwords": [
        [],
        [],
        []
      ]
    },
    {
      "word": "go",
      "subwords": [
        [],
        [],
        []
      ]
    }
  ],
  [
    {
      "word": "dog",
      "subwords": [
        [],
        [],
        [
          "do",
          "go"
        ],
        [
          "god"
        ]
      ]
    },
    {
      "word": "eat",
      "subwords": [
        [],
        [],
        [],
        [
          "tea"
        ]
      ]
    },
    {
      "word": "god",
      "subwords": [
        [],
        [],
        [
          "do",
          "go"
        ],
        [
          "dog"
        ]
      ]
    },
    {
      "word": "goo",
      "subwords": [
        [],
        [],
        [
          "go"
        ],
        []
      ]
    },
    {
      "word": "tea",
      "subwords": [
        [],
        [],
        [],
        [
          "eat"
        ]
      ]
    }
  ],
  [
    {
      "word": "good",
      "subwords": [
        [],
        [],
        [
          "do",
          "go"
        ],
        [
          "dog",
          "god",
          "goo"
        ],
        []
      ]
    }
  ]
]
Here is the snippet of code responsible for reading words.txt and generating the subwords.json output:
Code snippet goes here...
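Since the full file is long, here is a simplified, self-contained sketch that behaves the same way as my real code (the helper and variable names here are illustrative, not a verbatim copy):

import { readFileSync, writeFileSync } from "fs";

// Naive check: try to spell `candidate` out of the multiset of `word`'s letters.
function isSubword(candidate: string, word: string): boolean {
  const pool = word.split("");
  for (const ch of candidate) {
    const i = pool.indexOf(ch);
    if (i === -1) return false;
    pool.splice(i, 1); // consume one occurrence of ch
  }
  return true;
}

const words = readFileSync("words.txt", "utf8")
  .split(/\r?\n/)
  .filter((w) => w.length > 0)
  .sort();

const maxLen = words.reduce((m, w) => Math.max(m, w.length), 0);

// Top-level array: index n holds the entries for all words of length n.
const result: { word: string; subwords: string[][] }[][] = Array.from(
  { length: maxLen + 1 },
  () => []
);

for (const word of words) {
  // subwords[n] collects the length-n words spellable from `word`'s letters.
  const subwords: string[][] = Array.from(
    { length: word.length + 1 },
    () => []
  );
  for (const candidate of words) {
    if (candidate !== word && isSubword(candidate, word)) {
      subwords[candidate.length].push(candidate);
    }
  }
  result[word.length].push({ word, subwords });
}

writeFileSync("subwords.json", JSON.stringify(result, null, 2));

As written, this runs isSubword once for every ordered pair of words, so with 81,000 words it performs on the order of 6.5 billion checks, each of which builds and splices a character array; I assume that is where the hours are going.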
I have been focusing on making the isSubword function more efficient, so far without success. The current version does eventually produce correct output, but I want to speed it up significantly.
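For instance, one direction I have been looking at is precomputing a letter-count signature for every word and comparing counts instead of splicing strings, roughly like the sketch below, though I have not managed to turn it into an overall speedup yet:

// Candidate optimization (sketch): compare precomputed letter counts
// instead of scanning and splicing a character array for every pair.
type Counts = Record<string, number>;

// Build a letter -> occurrence-count map; computed once per word.
function letterCounts(word: string): Counts {
  const counts: Counts = {};
  for (const ch of word) counts[ch] = (counts[ch] ?? 0) + 1;
  return counts;
}

// candidate fits inside word iff it never needs a letter more times
// than word provides.
function isSubwordByCounts(candidate: Counts, word: Counts): boolean {
  for (const ch in candidate) {
    if ((word[ch] ?? 0) < candidate[ch]) return false;
  }
  return true;
}

// Intended usage: compute signatures once up front, then reuse them
// in the pair loop instead of calling the string-based isSubword:
//   const signatures = new Map<string, Counts>();
//   for (const w of words) signatures.set(w, letterCounts(w));
//   isSubwordByCounts(signatures.get("god")!, signatures.get("dog")!) // true

Even with a cheaper per-pair check, the pairwise loop itself is still quadratic, so I suspect the real win has to come from cutting down the number of pairs that get compared at all. Any suggestions or guidance on this would be highly appreciated!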