Data models are often designed around data availability and development convenience, but not around how users actually consume the data. As analytical products scale, this misalignment can significantly increase infrastructure costs. In this talk, I will share the transformation of the Seller Deep Dive Dashboard, which at one point was one of the most expensive dashboards to operate because of its data model design. The initial architecture relied on a fully denormalized table, where dimensions were merged directly into the fact table powering the dashboard. While this simplified early development, the model became increasingly inefficient as data volume grew and slowly changing dimensions expanded the dataset. By analyzing which KPIs users actually queried and how often, we redesigned the model around a usage-driven star schema. The new architecture separates metrics into two fact tables:
- A core fact table containing roughly 80% of the most frequently used KPIs, representing only 20% of the total data volume
- A secondary fact table containing the remaining, less frequently used but heavier metrics, queried only when required (a sketch of the usage analysis behind this split follows this list)
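To make the usage-driven part concrete, the sketch below shows one way such an analysis could work, assuming a simple query log with one row per KPI hit. The table, column, and KPI names (query_log, gmv, order_count, return_rate) are hypothetical placeholders, not the production model, and DuckDB stands in for the actual warehouse.

```python
# A minimal sketch of the usage analysis, assuming a hypothetical query_log
# table with one row per KPI hit; names and data are illustrative only.
import duckdb

con = duckdb.connect()

# Toy stand-in for the real dashboard query log.
con.execute("CREATE TABLE query_log (kpi TEXT, queried_at TIMESTAMP)")
con.execute("""
    INSERT INTO query_log VALUES
        ('gmv',         TIMESTAMP '2024-01-01 09:00'),
        ('gmv',         TIMESTAMP '2024-01-01 09:05'),
        ('gmv',         TIMESTAMP '2024-01-01 09:15'),
        ('order_count', TIMESTAMP '2024-01-01 09:10'),
        ('return_rate', TIMESTAMP '2024-01-01 10:00')
""")

# Rank KPIs by how often dashboard queries touch them.
rows = con.execute("""
    SELECT kpi, COUNT(*) AS hits
    FROM query_log
    GROUP BY kpi
    ORDER BY hits DESC, kpi
""").fetchall()

# Greedily keep the most-queried KPIs until ~80% of usage is covered;
# these go into the core fact table, the remainder into the secondary one.
total_hits = sum(hits for _, hits in rows)
core_kpis, covered = [], 0
for kpi, hits in rows:
    if covered >= 0.8 * total_hits:
        break
    core_kpis.append(kpi)
    covered += hits

print(core_kpis)  # ['gmv', 'order_count'] on this toy data
```

The greedy cutoff mirrors the 80/20 split above: the few heavily queried KPIs land in the core fact table, and everything else moves to the secondary one.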
This change dramatically reduced the amount of data scanned by the majority of queries. As a result, the dashboard went from one of the most expensive to run, costing roughly €3,500 over two weeks, to one of the most efficient, at approximately €350 over the same period: a 90% reduction in FinOps costs while maintaining full analytical capability.
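In schema terms, the split could look like the minimal sketch below. Again, all table and column names (dim_seller, fact_seller_core, fact_seller_ext) are hypothetical, and DuckDB is only a stand-in for the actual warehouse engine.

```python
# A minimal sketch of the resulting star schema; all names are hypothetical.
import duckdb

con = duckdb.connect()

con.execute(
    "CREATE TABLE dim_seller (seller_id INT, seller_name TEXT, country TEXT)"
)

# Core fact table: the frequently used KPIs, a small fraction of the volume.
con.execute("""
    CREATE TABLE fact_seller_core (
        seller_id INT, day DATE,
        gmv DOUBLE, order_count INT
    )
""")

# Secondary fact table: heavier, rarely queried metrics.
con.execute("""
    CREATE TABLE fact_seller_ext (
        seller_id INT, day DATE,
        return_rate DOUBLE, shipping_event_log TEXT
    )
""")

# The typical dashboard query scans only the narrow core table.
con.sql("""
    SELECT d.country, SUM(f.gmv) AS gmv, SUM(f.order_count) AS orders
    FROM fact_seller_core f
    JOIN dim_seller d USING (seller_id)
    GROUP BY d.country
""").show()

# The wide secondary table is joined in only when a heavy metric is requested.
con.sql("""
    SELECT f.seller_id, f.day, f.gmv, e.return_rate
    FROM fact_seller_core f
    JOIN fact_seller_ext e USING (seller_id, day)
""").show()
```

Because the core table is narrow, the default dashboard views scan only a small fraction of the data; the wide secondary table is read only when a heavy metric is explicitly requested.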