Overview of the main idea (3 sentences)

The main idea introduced in this paper is that instead of processing query plans one tuple at a time or all tuples at a time (the Volcano model and the materialized model, respectively), we should do a mix and process a vector of tuples at a time (on the order of 100-1000 tuples, depending on cache size). This is more in line with how CPUs are made fast: they rely on pipelining a batch of work in advance so they don't have to be interrupted.
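As a rough sketch of what "a vector at a time" means (not the paper's actual code; the primitive name and batch size here are illustrative), an operator calls a tight per-batch loop like this instead of a per-tuple next() call:

```c
#include <stdint.h>
#include <stdio.h>

enum { VECTOR_SIZE = 1024 };  /* illustrative batch size; the paper points at ~100-1000 tuples */

/* One vectorized primitive: add two columns for an entire batch in a tight loop.
 * No per-tuple function calls and no branches in the body, so the compiler can
 * unroll/vectorize it and the CPU keeps its pipelines full. */
static void map_add_int64(int64_t *dst, const int64_t *a, const int64_t *b, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = a[i] + b[i];
}

int main(void)
{
    int64_t a[VECTOR_SIZE], b[VECTOR_SIZE], out[VECTOR_SIZE];
    for (int i = 0; i < VECTOR_SIZE; i++) { a[i] = i; b[i] = 2 * i; }

    /* A query plan calls primitives like this once per vector, amortizing the
     * interpretation overhead over VECTOR_SIZE tuples instead of paying it per
     * tuple (Volcano) or materializing entire columns (materialized model). */
    map_add_int64(out, a, b, VECTOR_SIZE);
    printf("out[10] = %lld\n", (long long)out[10]);
    return 0;
}
```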

Other smaller ideas discussed throughout the paper include writing branchless code to get better and more predictable execution performance, given how modern CPUs predict and pipeline branches. The authors also show that carrying around selection vectors/bitmaps is worth it even though it may sound wasteful because of the extra memory they use. The point is that the CPU is more typically the bottleneck, so optimize for that first (but keep an eye on memory bandwidth too).
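A minimal sketch of the branchless-selection idea (function names and data are made up for illustration, not taken from the paper): instead of an if inside the loop, the qualifying index is written unconditionally and the output cursor is advanced by the comparison result, so there is no data-dependent branch for the CPU to mispredict.

```c
#include <stdint.h>
#include <stdio.h>

/* Branching selection: the if inside the loop gets mispredicted often when the
 * predicate selects around half of the input, stalling the pipeline. */
static int select_lt_branching(uint32_t *sel, const int64_t *col,
                               int64_t threshold, int n)
{
    int k = 0;
    for (int i = 0; i < n; i++)
        if (col[i] < threshold)
            sel[k++] = (uint32_t)i;
    return k;
}

/* Branchless selection: write the candidate index unconditionally and advance
 * the output cursor by the comparison result (0 or 1). There is no
 * data-dependent branch, so throughput stays flat across selectivities. */
static int select_lt_branchless(uint32_t *sel, const int64_t *col,
                                int64_t threshold, int n)
{
    int k = 0;
    for (int i = 0; i < n; i++) {
        sel[k] = (uint32_t)i;
        k += (col[i] < threshold);
    }
    return k;
}

int main(void)
{
    int64_t col[8] = {5, 12, 3, 40, 7, 9, 1, 30};
    uint32_t sel[8];

    int k = select_lt_branchless(sel, col, 10, 8);
    /* Downstream primitives take (sel, k) and only touch qualifying rows,
     * instead of copying the survivors into a new, smaller vector. */
    printf("%d of 8 tuples qualify; first qualifying row index = %u\n", k, sel[0]);
    (void)select_lt_branching;  /* same output; included only for contrast */
    return 0;
}
```

This is also where the selection-vector trade-off shows up: later operators keep working on the original columns plus (sel, k), paying a little extra memory to avoid copying data and to stay branch-free.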

All of the ideas behind this paper are about making better use of modern CPUs. More batching, less interrupting.

Key findings / takeaways from the paper (2-3 sentences)

System used in evaluation and how it was modified/extended (1 sentence)

MonetDB with the vectorized model is compared against MonetDB with the materialized model; the vectorized version handily beats the materialized one.

Workload Evaluated (1 sentence)

TPC-H at scale factor 100.