
Grafana Labs at KubeCon: Awesome Query Performance with Cortex

Published: 30 May 2019

At KubeCon + CloudNativeCon in Barcelona last week, Weaveworks’ Bryan Boreham and I did a deep-dive session on Cortex, an OSS, Apache-licensed project. A horizontally scalable, highly available, long-term storage for Prometheus, Cortex powers Grafana Cloud’s hosted Prometheus.

During our talk, we focused on the steps that we’ve taken to make Cortex’s query performance awesome.

Cortex embeds the Prometheus PromQL engine and pairs it with Cortex’s own scale-out storage engine. This allows us to stay feature-compatible with the queries users can run against their own Prometheus instances. Queries are handled by a set of stateless, scale-out querier jobs. You could always add more queriers to increase performance, but that only improved the handling of concurrent queries; a single query was still handled by a single process.

It may sound obvious, but we have to be able to answer queries in Cortex using only the information in the user’s query: the label matchers and the time range. First we find the matching series using the matchers. Then, using a secondary index, we find all the chunks for those series within the time range we’re interested in. We fetch those chunks into memory, and merge and deduplicate them into a format the Prometheus PromQL engine can understand. Finally, we pass this to the PromQL engine, which executes the query and returns the result to the user.
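
To make the flow concrete, here’s a minimal Go sketch of that read path. The types and the in-memory index are hypothetical stand-ins for Cortex’s real series index and chunk store; the sketch only illustrates the matchers-to-series-to-chunks-to-dedupe sequence described above.

```go
package main

import (
	"fmt"
	"time"
)

// Hypothetical stand-ins for Cortex's real series index and chunk store.
type SeriesID string

type Chunk struct {
	Series        SeriesID
	From, Through time.Time
}

type Index struct {
	series map[string][]SeriesID // label matcher -> matching series
	chunks map[SeriesID][]Chunk  // secondary index: series -> chunk refs
}

// queryChunks walks the read path: matchers -> series -> chunks,
// filtering by time range and deduplicating replicated chunks. The
// real system then decodes these and hands them to the PromQL engine.
func (ix *Index) queryChunks(matcher string, from, through time.Time) []Chunk {
	seen := map[Chunk]bool{}
	var out []Chunk
	for _, sid := range ix.series[matcher] { // step 1: find series
		for _, c := range ix.chunks[sid] { // step 2: find their chunks
			if c.Through.Before(from) || c.From.After(through) {
				continue // outside the queried time range
			}
			if !seen[c] { // step 3: merge and dedupe
				seen[c] = true
				out = append(out, c)
			}
		}
	}
	return out
}

func main() {
	now := time.Now()
	ix := &Index{
		series: map[string][]SeriesID{`{job="api"}`: {"s1"}},
		chunks: map[SeriesID][]Chunk{
			"s1": {{Series: "s1", From: now.Add(-time.Hour), Through: now}},
		},
	}
	fmt.Println(ix.queryChunks(`{job="api"}`, now.Add(-2*time.Hour), now))
}
```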

[Diagram: Query Path, 2018]

In the past year, we’ve made several optimizations.

Optimization 1: Batch Iterators for Merging Results

Cortex stores multiple copies of the data you send it in heavily-compressed chunks. To run a query, you have to fetch this data, merge it, and deduplicate it.

The initial technique was naive: we decompressed the chunks in memory and merged the raw samples. This was very fast, but it used a lot of memory, and the query processes would OOM (run out of memory) when large queries were sent to them.

So we started using iterators to deduplicate the compressed chunks in a streaming fashion, without decompressing all the chunks up front. This used very little memory, but the performance was terrible: we used a heap to track the iterators, and the heap operations needed to find the next iterator were very expensive.

We then moved to a technique that used batching iterators. Instead of fetching a single sample on every iteration, we fetched a batch. We still used the heap, but we had to consult it far less often. This was almost as fast as the original method, and used almost as little memory as the pure iterator-based approach.
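
Here’s a self-contained Go sketch of that batching merge. The batch size, sample type, and stream are illustrative rather than Cortex’s real ones, but the heap usage shows the trick: the heap is consulted only when the leading stream stops having the smallest timestamp, not once per sample.

```go
package main

import (
	"container/heap"
	"fmt"
	"math"
)

type sample struct {
	ts int64
	v  float64
}

const batchSize = 4 // illustrative; real batch sizes are tuned

// stream yields the samples of one chunk a batch at a time.
type stream struct {
	data  []sample // stands in for lazily-decoded chunk contents
	batch []sample // current batch
	i     int      // cursor within the batch
}

func (s *stream) refill() bool {
	n := batchSize
	if n > len(s.data) {
		n = len(s.data)
	}
	if n == 0 {
		return false
	}
	s.batch, s.data, s.i = s.data[:n], s.data[n:], 0
	return true
}

func (s *stream) head() int64 { return s.batch[s.i].ts }

// streamHeap orders streams by the timestamp at the head of their batch.
type streamHeap []*stream

func (h streamHeap) Len() int            { return len(h) }
func (h streamHeap) Less(i, j int) bool  { return h[i].head() < h[j].head() }
func (h streamHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *streamHeap) Push(x interface{}) { *h = append(*h, x.(*stream)) }
func (h *streamHeap) Pop() interface{} {
	old := *h
	s := old[len(old)-1]
	*h = old[:len(old)-1]
	return s
}

// secondHead is the head timestamp of the next-best stream: in a
// binary heap, the smaller of the root's two children.
func secondHead(h streamHeap) int64 {
	best := int64(math.MaxInt64)
	for _, i := range []int{1, 2} {
		if i < len(h) && h[i].head() < best {
			best = h[i].head()
		}
	}
	return best
}

// merge combines k sorted streams, skipping duplicate timestamps from
// replicas. Runs of samples are drained from the smallest stream
// without any heap operations; the heap is only fixed up afterwards.
func merge(streams []*stream) []sample {
	h := &streamHeap{}
	for _, s := range streams {
		if s.refill() {
			heap.Push(h, s)
		}
	}
	var out []sample
	for h.Len() > 0 {
		s, limit := (*h)[0], secondHead(*h)
		exhausted := false
		for !exhausted && s.head() <= limit {
			if smp := s.batch[s.i]; len(out) == 0 || out[len(out)-1].ts != smp.ts {
				out = append(out, smp) // dedupe identical timestamps
			}
			s.i++
			if s.i == len(s.batch) {
				exhausted = !s.refill()
			}
		}
		if exhausted {
			heap.Pop(h)
		} else {
			heap.Fix(h, 0)
		}
	}
	return out
}

func main() {
	a := &stream{data: []sample{{1, 1}, {3, 3}, {5, 5}}}
	b := &stream{data: []sample{{1, 1}, {2, 2}, {4, 4}}}
	fmt.Println(merge([]*stream{a, b})) // [{1 1} {2 2} {3 3} {4 4} {5 5}]
}
```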

Optimization 2: Caching… Everything

As explained, Cortex first consults the index to work out what chunks to fetch, then fetches the chunks, merges them, and executes the query on the result.

We added memcached clusters everywhere possible – in front of the index, in front of the chunks, etc. These were very effective at reducing peak load on the underlying storage and massively improved average query latency.
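
As an illustration, a cache-aside lookup in front of the chunk store might look like the sketch below. The cache type is a toy stand-in for a memcached client, and fetchFromStore is a hypothetical backend call – this is the pattern, not Cortex’s code.

```go
package main

import (
	"fmt"
	"sync"
)

// cache is a toy stand-in for a memcached client.
type cache struct {
	mu sync.Mutex
	m  map[string][]byte
}

func (c *cache) get(k string) ([]byte, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	v, ok := c.m[k]
	return v, ok
}

func (c *cache) set(k string, v []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[k] = v
}

// fetchChunk is the cache-aside pattern: try memcached first, hit the
// backing store only on a miss, then populate the cache so the next
// query is served from memory.
func fetchChunk(c *cache, fetchFromStore func(string) []byte, key string) []byte {
	if v, ok := c.get(key); ok {
		return v // hit: the backing store sees no load at all
	}
	v := fetchFromStore(key)
	c.set(key, v)
	return v
}

func main() {
	c := &cache{m: map[string][]byte{}}
	store := func(k string) []byte {
		fmt.Println("store fetch:", k)
		return []byte("chunk-data")
	}
	fetchChunk(c, store, "chunk/42") // miss: goes to the store
	fetchChunk(c, store, "chunk/42") // hit: served from cache
}
```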

[Diagram: Caching]

In Cortex, the index is always changing, so we had to tweak the write path to make it cacheable. We made the ingesters hold on to (and serve) data for 15 minutes after writing its index entries, which means a cached index entry can be up to 15 minutes stale without queries missing any data: whatever the stale entry doesn’t cover is still in memory on the ingesters.
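
The resulting invariant is simple enough to sketch. The 15-minute window comes from the post; the code itself is purely illustrative, not Cortex’s.

```go
package main

import (
	"fmt"
	"time"
)

// Ingesters keep serving data for 15 minutes after writing its index
// entries, so an index row fetched from cache may be up to 15 minutes
// stale without queries losing samples. (Illustrative constant.)
const indexCacheValidity = 15 * time.Minute

func stillValid(cachedAt, now time.Time) bool {
	return now.Sub(cachedAt) <= indexCacheValidity
}

func main() {
	cachedAt := time.Now().Add(-10 * time.Minute)
	fmt.Println(stillValid(cachedAt, time.Now())) // true: within the window
}
```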

[Diagram: More Caching]

Optimization 3: Query Parallelization and Results Caching

We added a new job in the query pipeline: the query frontend.

[Diagram: Query Frontend]

This job is responsible for aligning, splitting, caching, and queuing queries.

Aligning: We align the start and end time of the incoming queries with their step. This helps make the results more cacheable. Grafana 6.0 does this by default now.
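
For example, a minimal version of that alignment might look like this (timestamps and step in milliseconds; the function is illustrative, not Cortex’s actual code):

```go
package main

import "fmt"

// align snaps start and end down to a multiple of the query step, so a
// dashboard refreshing every few seconds issues byte-identical queries
// that hit the results cache.
func align(start, end, step int64) (int64, int64) {
	return start - start%step, end - end%step
}

func main() {
	// The same 1h range with a 60s step, asked a few seconds apart,
	// aligns to identical boundaries and therefore one cache key.
	fmt.Println(align(1559000005000, 1559003605000, 60000))
	fmt.Println(align(1559000009000, 1559003609000, 60000))
}
```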

Splitting: We split queries that have a large time range into multiple smaller queries, so we can execute them in parallel.
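
The splitting itself can be as simple as cutting on interval boundaries. A sketch assuming day-sized intervals, which is what the query frontend split on by default, with millisecond timestamps:

```go
package main

import "fmt"

const dayMs = 24 * 60 * 60 * 1000

type timeRange struct{ start, end int64 }

// splitByDay cuts one long-range query into day-aligned subqueries
// that can be executed in parallel.
func splitByDay(start, end int64) []timeRange {
	var out []timeRange
	for start < end {
		next := (start/dayMs + 1) * dayMs // next UTC midnight
		if next > end {
			next = end
		}
		out = append(out, timeRange{start, next})
		start = next
	}
	return out
}

func main() {
	// A roughly three-day query becomes four day-bounded subqueries.
	for _, r := range splitByDay(1559000000000, 1559250000000) {
		fmt.Println(r)
	}
}
```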

Caching: If the exact same query is asked twice, we can return the previous result. We can also detect partial overlaps between queries sent and results cached, stitching together cached results with results from the query service.
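
A sketch of the overlap detection: given the cached extents, compute the sub-ranges that still need to be executed. The types are hypothetical, and the real frontend also merges the newly fetched results back into the cache.

```go
package main

import "fmt"

type extent struct{ start, end int64 }

// missingRanges returns the gaps in [start, end) not covered by cached
// extents (assumed sorted and non-overlapping). Only these gaps are
// re-executed; cached extents are stitched around them.
func missingRanges(start, end int64, cached []extent) []extent {
	var gaps []extent
	for _, e := range cached {
		if e.end <= start || e.start >= end {
			continue // extent doesn't overlap the request
		}
		if e.start > start {
			gaps = append(gaps, extent{start, e.start})
		}
		start = e.end
	}
	if start < end {
		gaps = append(gaps, extent{start, end})
	}
	return gaps
}

func main() {
	cached := []extent{{100, 200}, {300, 400}}
	fmt.Println(missingRanges(50, 450, cached)) // [{50 100} {200 300} {400 450}]
}
```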

Queuing: We put queries into per-tenant queues to ensure a single big query from one customer can’t cause a denial of service (DoS) for smaller queries from other customers. We then dispatch queries in order, and in parallel.
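
A toy version of that fairness mechanism: one FIFO per tenant and a round-robin dispatcher. The types are hypothetical, and the real frontend dispatches to a pool of parallel queriers rather than a single loop.

```go
package main

import "fmt"

// frontier holds one FIFO per tenant and round-robins across tenants,
// so one tenant's burst of big queries can't starve everyone else.
type frontier struct {
	queues map[string][]string // tenant -> pending queries
	order  []string            // round-robin order of tenants
}

func (f *frontier) enqueue(tenant, query string) {
	if _, ok := f.queues[tenant]; !ok {
		f.order = append(f.order, tenant)
	}
	f.queues[tenant] = append(f.queues[tenant], query)
}

// dequeue returns the next query, taking one per tenant in turn.
func (f *frontier) dequeue() (string, bool) {
	for i, t := range f.order {
		q := f.queues[t]
		if len(q) == 0 {
			continue
		}
		f.queues[t] = q[1:]
		// Rotate so the next dequeue starts after this tenant.
		f.order = append(f.order[i+1:], f.order[:i+1]...)
		return q[0], true
	}
	return "", false
}

func main() {
	f := &frontier{queues: map[string][]string{}}
	f.enqueue("big-tenant", "q1")
	f.enqueue("big-tenant", "q2")
	f.enqueue("big-tenant", "q3")
	f.enqueue("small-tenant", "q4")
	for q, ok := f.dequeue(); ok; q, ok = f.dequeue() {
		fmt.Println(q) // q1, q4, q2, q3: small-tenant isn't starved
	}
}
```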

Other Optimizations

We’ve made numerous other tweaks to improve performance.

We optimized the JSON marshalling and unmarshalling, which has a big effect on queries that return a very high number of series.

We added HTTP response compression, so users at the end of slower links can still get fast query responses.
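
In Go this is the standard middleware pattern; the sketch below shows the idea, not Cortex’s actual handler.

```go
package main

import (
	"compress/gzip"
	"fmt"
	"log"
	"net/http"
	"strings"
)

// gzipWriter routes handler output through a gzip.Writer.
type gzipWriter struct {
	http.ResponseWriter
	gz *gzip.Writer
}

func (w gzipWriter) Write(b []byte) (int, error) { return w.gz.Write(b) }

// withGzip compresses responses for clients that advertise support.
func withGzip(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
			next.ServeHTTP(w, r) // client can't decode gzip
			return
		}
		w.Header().Set("Content-Encoding", "gzip")
		gz := gzip.NewWriter(w)
		defer gz.Close()
		next.ServeHTTP(gzipWriter{w, gz}, r)
	})
}

func main() {
	api := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, `{"status":"success"}`)
	})
	log.Fatal(http.ListenAndServe(":8080", withGzip(api)))
}
```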

We hashed and sharded index rows to achieve better load distribution across our index. As a bonus, the resulting smaller rows can be looked up in parallel.
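
A sketch of the idea: derive a shard number from a hash of the series and prefix the row key with it, so one hot logical row becomes N physical rows that can be read in parallel. The shard count and key layout here are illustrative.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

const shards = 16 // illustrative shard count

// shardedRowKey prefixes an index row key with a hash-derived shard,
// spreading a hot row (e.g. a very popular metric name) across many
// physical rows; reads query all shards in parallel and merge.
func shardedRowKey(rowKey, seriesID string) string {
	h := fnv.New32a()
	h.Write([]byte(seriesID))
	return fmt.Sprintf("%02d:%s", h.Sum32()%shards, rowKey)
}

func main() {
	// Two series under the same logical row land in different shards.
	fmt.Println(shardedRowKey("metric=http_requests_total", "series-a"))
	fmt.Println(shardedRowKey("metric=http_requests_total", "series-b"))
}
```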

In short, every layer of the stack has been optimized!

And the results are clear: Cortex, when run in Grafana Cloud, achieves <50ms average response time and <500ms 99th percentile response time across all our production clusters. We’re pretty proud of these results, and we hope you’ll notice the improvements too.

But we’re not done yet. We have plans, in collaboration with the Thanos team, to further parallelize big queries at the PromQL layer. This should make Cortex even better for high cardinality workloads.

Most large-scale, clustered TSDBs talk about ingestion performance, and in fact, ingesting millions of samples per second is hard. We spent the first three years of Cortex’s life talking about ingestion challenges. But we have now moved on to talking about query performance. I think this is a good indication of the maturity of the Cortex project, and makes me feel good.

Read more about Cortex on our blog and get involved.
