Impala Discussion With The Product Manager (Greg Rahn)

Q&A session on specific issues that bothered us

Yesterday I had a 90-minute e-meeting with Greg Rahn, a product manager on the Cloudera team that contributes to Apache Impala. I want to thank him here for his time; it’s truly awesome to be a user of a product with a PM like Greg.

I came to our conversation prepared with a bunch of questions, and we discussed each and every one of them in detail.

In this post I’ll write a detailed summary of each question and answer. Let’s start.

  1. Will it be possible in the future to upgrade Impala separately from the whole CDH?
    He said it is not done today because there are a number of cross-project dependencies (HDFS, Parquet, Hive Metastore API) which make the testing and compatibility matrix quite large, though he understood the desire to do so.
    We discussed efforts to separate compute and storage for interactive analytical queries (one Impala cluster, one HDFS + YARN cluster).
    Separate clusters would solve the upgrade problem. Cloudera is also heading toward this compute/storage strategy, which should ease the difficulties with their inter-project dependencies.
  2. Will Impala support queries over SQLServer, Oracle or Elastic? (like Spark DataSource or Presto Connector)
    Probably not in the near term, because it’s not a common request from users. I think the fact that Impala isn’t going to become more pluggable is a problem, because I see that need in our organization. They recommend exporting the source data into HDFS instead.
    We discussed the fact that PrestoDB is doing it, and he doubted the actual efficiency of a cross-source join. Greg said that pushing down predicates to the data source is the easy part, but optimizing which joins can be run in the remote data source is a different issue. He asked to learn more about Presto’s planner, and I sent him this article: Introduction to Presto Cost-Based Optimizer
  3. How does Impala handle selective queries over big tables?
    Data clustering (since v2.9), which is basically an optimization of Parquet predicate pushdown achieved by sorting the data; more details can be found here: Faster Performance For Selective Queries. But we already knew about that one.
    Another interesting thing he told me about was the future use of a concept called “Index Pages” in Parquet.
  4. How can I see the actual query execution time? (and not the session time)
    Greg agreed there’s probably a bug in CM or in the metrics Impala shows, and that they need to fix it. He suggested looking at the query timeline in the query profile, because we might find the actual execution time there. He understood how problematic it is to be unable to correctly measure execution times.
  5. How can I write Impala-optimized parquet files with Spark?
    He is aware that a large number of his customers use Spark to write their Parquet files, even though Impala generates the best Parquet for itself. The Cloudera engineers working on Parquet and Spark see this as important for users as well.
  6. Can you expose the Query Profile in a JSON format?
    The query profile is provided as plain text or in Thrift format, which makes it really hard to analyze. He totally gets this pain and promised to have the team work on it for the next versions. Making it JSON is not very complicated, and it would be really valuable to all users.
  7. Impala needs better documentation.
    We need more details around execution, and around reading logs and query profiles to determine issues and troubleshoot. He agrees with that, and I hope we’ll see actual “under the hood” documentation for Impala soon enough.
  8. REST API for queries (similar to Presto)
    He agrees with me that it’s essential and thinks a REST API is a good idea (maybe also to be used by Hue). I don’t know if it’ll happen soon, but I can tell he thinks it’s important. He asked about the way the Presto REST API works; it’s described in this document.
  9. How does he suggest we repartition tables, for example, from hourly partitions to monthly ones?
    I explained to him our partition-explosion problem and that we need to move from hourly to monthly partitions in some tables. The interesting thing is that he suggested the exact solution we had thought of: repartitioning the tables and then creating views with a substring on the dt partition column, to make the change transparent to the user.
  10. Why do queries reach 99% fast and then get stuck on the last 1%?
    I described this weird problem we observed to Greg: ~99% of the files in a query are scanned fast, and then the last ~1% wait for scanners for a very long time. He didn’t seem to be aware of anything like that. I told him my conjecture that this is some sort of “scanner starvation” resulting from the way Impala prioritises resources among queries. He asked Engineering, and I hope we’ll get an answer soon.
  11. Why is there no metadata cache cleaning mechanism? (TTL, LRU, etc.)
    The answer to this question was pretty interesting. In his view, the current method of handling metadata is fundamentally wrong. He reckons they should aspire to get Impala’s metadata to a “zero-touch” state, meaning the user doesn’t have to do anything to keep the metadata up to date (i.e. no REFRESH / INVALIDATE METADATA). This is actually a main project in Impala these days, and he referred me to messages from Todd Lipcon on the Impala mailing list. He also sent me this very detailed shared Google Doc: Proposal for new approach to Impala catalog and the following email: Update on catalog changes
    Bottom line: they are investing a lot of resources in improving the catalog, so there’s a lot to look forward to.
  12. Where is Impala heading?
    Other than bug fixes, improvements and the catalog work, Greg is also planning to give Impala better automatic resource control. He told me that one of their goals is to make Impala’s memory estimation for queries much better, and to automatically measure the cluster’s resources so queries can run without running out of memory. I hope we’ll see some interesting and useful features in Impala in 2019.
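
To make the data-clustering answer (question 3) concrete: sorting rows before writing Parquet tightens each row group’s min/max statistics, so a selective predicate can skip most row groups entirely. Here is a minimal plain-Python simulation of that effect. This is not Impala or Parquet code; the values and group size are made up for illustration:

```python
# Sketch: why sorting data helps Parquet predicate pushdown.
# Each row group stores min/max stats per column; a selective filter
# can skip any row group whose [min, max] range cannot contain the value.

def row_group_stats(rows, group_size):
    """Split rows into row groups and record (min, max) per group."""
    groups = [rows[i:i + group_size] for i in range(0, len(rows), group_size)]
    return [(min(g), max(g)) for g in groups]

def groups_to_scan(stats, value):
    """Count row groups whose min/max range may contain `value`."""
    return sum(1 for lo, hi in stats if lo <= value <= hi)

values = [17, 3, 42, 8, 99, 23, 5, 61, 12, 77, 30, 1]

# Unsorted: the value 42 falls inside every group's min/max range.
print(groups_to_scan(row_group_stats(values, 4), 42))          # 3

# Sorted (roughly what data clustering achieves): one group qualifies.
print(groups_to_scan(row_group_stats(sorted(values), 4), 42))  # 1
```

The same logic is why highly selective queries get so much faster once the table is sorted on the filtered column.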
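On question 4, the query timeline Greg pointed at records cumulative offsets per event, so the actual execution time can be derived by subtracting one event’s offset from another’s. A rough sketch, assuming an already-parsed timeline of (event, millisecond) pairs; the real profile timeline is plain text, and the event names below are only illustrative:

```python
# Sketch: deriving execution time from a query timeline, assuming a
# simplified list of (event, cumulative-milliseconds) pairs.
# The real Impala profile timeline would need to be parsed first.

def execution_ms(timeline, start_event, end_event):
    """Elapsed milliseconds between two timeline events."""
    offsets = dict(timeline)
    return offsets[end_event] - offsets[start_event]

timeline = [
    ("Planning finished", 120),
    ("Ready to start fragments", 135),
    ("First row fetched", 2400),
    ("Unregister query", 90000),  # session idled long after the query ran
]

# The ~90s session time vastly overstates the actual work done:
print(execution_ms(timeline, "Planning finished", "First row fetched"))  # 2280
```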
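As for the repartitioning trick from question 9, it works because the monthly partition key is just a substring of the hourly one, so a view can keep old hourly-style queries working unchanged. A tiny Python illustration of the key mapping; the actual fix would be an Impala view using substr() on the partition column, and the key format here is hypothetical:

```python
# Sketch: mapping an hourly partition key to its monthly key.
# The view-based equivalent would be something like (hypothetical names):
#   CREATE VIEW t AS SELECT *, substr(dt, 1, 7) AS dt_month FROM t_monthly;

def monthly_dt(hourly_dt):
    """Map an hourly partition key like '2019-01-15-03' to '2019-01'."""
    return hourly_dt[:7]

print(monthly_dt("2019-01-15-03"))  # 2019-01
```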

That’s all for now, we will have another call in about a month or so.

To sum up, I think Impala is heading in the right direction, but one thing that bothers me is that they’re not planning to make it more pluggable. I was very impressed by Greg. He has a very rich technological background, and I’m 100% sure he deeply understands the users.

I like data-backed answers