Hotspotting In Hadoop — Impala Case Study

Why Small, Frequently-Queried Tables Shouldn’t Be Stored in HDFS

Adir Mashiach
4 min read · Apr 13, 2018

In this post I’ll describe a weird problem we had with our Impala service: how we investigated it, how we solved it, and the conclusions we drew from the whole experience. In my opinion this is relevant not only for Impala but for every processing platform that operates over HDFS.

[Image: A really hot spot, not in Hadoop]

Something’s wrong

We have a Kibana dashboard with cool charts we’ve built that show us interesting data on the Impala queries from the last 14 days. Maybe I’ll write a post in the future about how we do BI on our Impala performance with the ELK stack.

One of the charts in the dashboard shows the 75th, 90th and 95th percentiles of query duration. Thanks to that chart, a few weeks ago we noticed a sudden jump in query duration over the last 2–3 days. We have another chart showing the number of exceptions per hour, and we saw a correlated jump in that chart too.
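For context, a percentile chart like that boils down to a simple percentiles aggregation in Elasticsearch. The sketch below is only an illustration of the idea; the index name (impala-queries-*) and the field names (@timestamp, duration_ms) are placeholders, not our actual mapping.

# A rough sketch of the aggregation behind such a chart (index and field names are assumptions).
curl -s -X POST 'http://localhost:9200/impala-queries-*/_search' \
  -H 'Content-Type: application/json' -d '{
    "size": 0,
    "query": { "range": { "@timestamp": { "gte": "now-14d" } } },
    "aggs": {
      "query_duration": {
        "percentiles": { "field": "duration_ms", "percents": [75, 90, 95] }
      }
    }
  }'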

At that moment we knew we had a problem. Now it was time for a little investigation.

Diagnosing The Problem

We examined the most common exceptions among all the ones we got in those 2–3 days, and we found something interesting.

The main exception was ‘backend impala daemon is over its memory limit’. You get that exception when a query needs a certain Impala daemon for its execution but that specific daemon is at 100% memory usage. By the way, the exception doesn’t tell you which daemon that is.

The next most common exceptions were ‘unreachable impalad(s): X, Y, Z’, which you get when the statestore’s health check for certain daemons fails. Those exceptions do tell you which daemons are unreachable, and we noticed that the same 3 daemons appeared in them over and over again.

Then we checked those daemons in the Cloudera Manager and saw that their memory usage was almost 100%. The first thing we did was restart the 3 daemons. It didn’t help: their memory usage quickly jumped back to 100%.

What could the problem be? We decided to analyze the queries from the last 7 days to see whether there was a difference between the last 2–3 days and the days before them.

Analyzing The Queries

That was an interesting process. First of all, I should say that most of our Impala queries are not written and submitted by an analyst. Most of the queries are generated by BI tools or automatic alerting systems, which means we can easily check whether something changed by looking at the queries’ templates.

So that’s what we did. We extracted the templates of the queries from the last 7 days and performed a simple ‘group by count’, to see which templates were the most common in the past 2–3 days compared to the days before them.
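If you’re curious what such a ‘group by count’ looks like in practice, here is a minimal sketch. It assumes the query texts are dumped one per line into a plain file called queries.txt (a made-up name; we actually pull them from our ELK stack), and it builds the templates naively by masking string and numeric literals:

# Mask literals so queries that differ only in constants collapse into one template,
# then count how many times each template appears.
sed -e "s/'[^']*'/'?'/g" -e 's/[0-9][0-9]*/?/g' queries.txt \
  | sort | uniq -c | sort -rn | head -20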

And just as we suspected, we found a query template that appeared about 10,000 times in the past 3 days, compared to about 150 times in the 4 days before that.

Then we asked ourselves, what does this query template have to do with the 3 impala daemons that keep reaching 100% memory usage?

The Hotspotting

We looked at the query template and we saw a very long query with a lot of LIKE operators and CONCAT()s. It looked like this:

SELECT x, y, z FROM table t
WHERE t.blah LIKE CONCAT('%', 'xblahblah', '%')
OR t.blah LIKE CONCAT('%', 'yblahblah', '%')
OR t.blah LIKE CONCAT('%', 'zblahblah', '%')
-- (hundreds of those...)

That’s a really stupid way of using the LIKE operator, and that’s kind of a heavy query, but still — it doesn’t explain the 3 daemons issue.

And then we checked the table in the query and we saw something weird: the table size was about 100MB, less than the size of a single HDFS block.
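Checking that is a one-liner; the warehouse path below is a made-up example, not our real one:

# Print the total size of the table's directory on HDFS (hypothetical path).
hdfs dfs -du -s -h /user/hive/warehouse/some_db.db/small_table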

We had an idea of what caused the memory explosion in those 3 daemons.

Impala leverages data locality, so we guessed that the 3 replicas of the table’s single HDFS block were stored on the hosts of those exact 3 daemons.

So with a simple hadoop fsck {path} -files -blocks -locations we found where the block’s replicas were stored, and it confirmed our assumption.

Thousands of queries with the template described above were executed only on those 3 Impala daemons, in order to leverage data locality, and that caused the memory usage explosion. That’s hotspotting.

The Solution

We moved that table to an RDBMS and that solved the problem. We could also have increased the replication factor of the file, but we considered that bad practice because, in our opinion, Hadoop is not meant for such small tables.
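For completeness, if you do decide to keep such a table in HDFS, raising its replication factor is a single command. The factor and path here are hypothetical; you’d pick a value that matches your cluster size:

# Spread the replicas across more datanodes so more impalads can read the block locally.
# -w waits until the re-replication actually finishes.
hdfs dfs -setrep -w 10 /user/hive/warehouse/some_db.db/small_table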

Conclusions and Improvements

We had 3 conclusions/improvements from that incident:

  1. We created a new chart in the Cloudera Manager that shows the memory usage per Impala daemon and placed it in the Impala dashboard. That way we can identify daemons with relatively high memory usage and diagnose the problem earlier.
  2. Analyzing the queries in order to investigate a problem can give you a really good clue about what’s going on.
  3. Small, frequently-queried tables shouldn’t be stored in HDFS, because that causes hotspotting. Don’t get me wrong, we have many small tables, but they’re not queried nearly as frequently as this one was (about 10,000 queries in 2–3 days). And if you do choose to store such a table in HDFS, make sure its replication factor is high enough (e.g. with hdfs dfs -setrep, as shown above).

We always try to learn from incidents like this, because the fix itself is not enough. The important thing after resolving a technical issue is to learn how to prevent similar issues in the future.

I once heard a nice saying: “Those who shorten their way in the routine, will find themselves lost in an emergency.”
