I need to scan very large JSONL files efficiently and am considering a parallel grep-style approach over line-delimited text.
Would love to hear how you would design it.
How large is very large? Would it be something that jq can't do? Is it a pure string search or a JSON-tree search?

Generally you would want to get the file size, split it into ranges that can each be read as valid UTF-8, and feed each range to a reader thread. This can be inefficient on HDDs, because each thread will try to access a random location on disk, forcing the needle to jump back and forth. You'll also need to re-read the ranges around each split point with some positive and negative offset, in case the content you're looking for got split across a boundary.

Things get much more complicated if you want a JSON-tree grep: branches may get separated from their parent nodes across multiple ranges.
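A minimal sketch of the range-splitting approach, assuming plain string search over JSONL. Since the input is line-delimited, each worker can align its range start to the next newline instead of re-reading with offsets: a line belongs to the chunk that contains its first byte, so every line is scanned exactly once. The pattern, worker count, and function names here are illustrative, not from the original post; threads are used because the work is I/O-bound (processes would suit heavier per-line parsing).

```python
import os
from concurrent.futures import ThreadPoolExecutor

PATTERN = b"needle"  # hypothetical search term


def align_to_line_start(f, offset):
    """Advance offset to the first byte after the next '\n' (0 is already a line start)."""
    if offset == 0:
        return 0
    f.seek(offset - 1)
    f.readline()  # consume the (possibly partial) line straddling the boundary
    return f.tell()


def scan_range(path, start, end):
    """Return matching lines whose first byte lies in [start, end)."""
    hits = []
    with open(path, "rb") as f:
        pos = align_to_line_start(f, start)
        f.seek(pos)
        while pos < end:
            line = f.readline()
            if not line:
                break
            if PATTERN in line:
                hits.append(line)
            pos = f.tell()
    return hits


def parallel_grep(path, workers=4):
    size = os.path.getsize(path)
    step = max(1, size // workers)
    ranges = [(s, min(s + step, size)) for s in range(0, size, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(scan_range, path, s, e) for s, e in ranges]
        return [line for fut in futures for line in fut.result()]
```

If a range boundary lands mid-line, the alignment step skips forward past that line, and the previous worker reads it to completion even though it extends past its `end`, which is what removes the need for the overlap re-read in the purely line-delimited case.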