In my last blog, I wrote about how using an appropriate regular expression in the Flume script gave us a significant improvement in Flume's performance.
In another project, we ran into a different problem and realized that, beyond choosing the right regular expression, Flume's performance could be increased drastically in another way.
Context
We were receiving data in the form of CSV files. Each file was very small, just a few KB in size, and we were passing the files to Flume one by one. Initially, when only a small number of files was available for testing, we did not notice a problem: the files loaded in a matter of seconds.
In the performance testing phase, we started receiving good volumes of data and needed to load thousands of files within a few seconds. That is when we noticed that Flume could not load the files as fast as we expected. We could load barely 60-70 files per minute, which was inadequate.
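For context, a typical way to hand files to Flume one at a time is a spooling-directory source feeding an HDFS sink. The sketch below is illustrative only; the agent, channel, directory, and path names are assumptions, and the original project's configuration may have differed.

```
# Minimal sketch of a Flume agent that picks up files one by one from a
# local directory and writes them to HDFS. All names and paths are illustrative.
agent1.sources  = csvSource
agent1.channels = memChannel
agent1.sinks    = hdfsSink

# Spooling-directory source: each file dropped into spoolDir is ingested.
agent1.sources.csvSource.type     = spooldir
agent1.sources.csvSource.spoolDir = /data/csv/incoming
agent1.sources.csvSource.channels = memChannel

# In-memory channel buffering events between source and sink.
agent1.channels.memChannel.type     = memory
agent1.channels.memChannel.capacity = 10000

# HDFS sink writing the ingested events as plain text.
agent1.sinks.hdfsSink.type          = hdfs
agent1.sinks.hdfsSink.hdfs.path     = /user/flume/csv
agent1.sinks.hdfsSink.hdfs.fileType = DataStream
agent1.sinks.hdfsSink.channel       = memChannel
```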
We knew that HDFS prefers dealing with a small number of large files rather than a large number of small files. After some analysis, we realized that the same concept might also apply to Flume.
Approach to the Problem
We then introduced a pre-processing step in which we combined multiple small files into a single big file before passing it to Flume. The results were astonishing: concatenating smaller files into bigger files before passing them to Flume improved loading times significantly. In one instance, throughput improved by more than 700%!
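To illustrate the idea, here is a minimal pre-processing sketch (not the exact script we used) that concatenates batches of small CSV files into one larger file before handing it to Flume. The directory names and batch size are placeholders, and it assumes the files either have no header row or that repeated headers are acceptable downstream.

```python
import glob
import os

# Hypothetical paths and batch size -- adjust to the actual environment.
SOURCE_DIR = "/data/csv/landing"    # where the small CSV files arrive
STAGING_DIR = "/data/csv/incoming"  # directory that Flume watches
BATCH_SIZE = 1000                   # small files merged into one big file


def combine_csv_files(source_dir, staging_dir, batch_size):
    """Concatenate small CSV files into larger batch files before Flume picks them up."""
    files = sorted(glob.glob(os.path.join(source_dir, "*.csv")))
    for batch_no, start in enumerate(range(0, len(files), batch_size)):
        batch = files[start:start + batch_size]
        out_path = os.path.join(staging_dir, f"combined_{batch_no:05d}.csv")
        with open(out_path, "w") as out:
            for path in batch:
                with open(path) as src:
                    out.write(src.read())
        # Remove (or archive) the originals so they are not processed again.
        for path in batch:
            os.remove(path)


if __name__ == "__main__":
    combine_csv_files(SOURCE_DIR, STAGING_DIR, BATCH_SIZE)
```

The key point is simply that Flume then sees one large file instead of thousands of tiny ones, so far less time is spent on per-file overhead.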
Here’s a summary of what we achieved by using different combinations.