Your dashboard is probably broken right now. How could I know that? “It’s not broken!” Go check it. It’s broken.
Interactive dashboards are complicated by nature. They have an essentially unlimited number of possible states, and usually sit on top of many disparate systems that are constantly changing. Eventually, something will become misaligned, and left unchecked, things devolve into chaos.
The beautiful thing about well-conceived Business Intelligence and dashboarding tools is that they enable fast iteration and one-off insights. The challenge is that optimizing for free-wheeling experimentation can run counter to the goal of keeping something maintained, stable, and high quality.
Often, a data visualization is built out to answer a point-in-time question for the business. But every so often, you hit gold. Stakeholders get so much value from a dashboard that they start visiting it every day. It gets added to checklists and integrated into their workflows. Suddenly, a dashboard that was initially thrown together in a few hours has become mission critical.
Eventually, something about that dashboard is going to need to change. Maybe an upstream data column is being renamed, or users want to add an additional filter control. And this is where things start to go wrong.
Somebody spends time clicking around in a WYSIWYG editor until things look right. Except there are a few edge cases that aren't handled correctly, and nobody discovers them until a user has already made a bad decision based on incorrect data.
Or, the dashboard has become so critical that nobody is confident they can make a change without breaking it, so nothing gets changed at all. The dashboard grows stale, and people simply stop using it.
This vicious cycle leads to a breakdown of trust. Every broken dashboard hurts your team’s credibility and causes the organization to drift away from data-driven decisions.
Allowing users to explore and experiment with your data is crucial for building a data-driven culture. But when a report has become a production system – a real product with real users that depend on it – it’s important to start treating it more like an application.
We can look at other production systems for inspiration. When writing code, software engineers leverage a host of best practices to manage change and keep things from breaking. And in the past few years this discipline has extended to the data pipeline layer as well, through tools like Airflow, dbt, and Terraform. It’s long overdue that we start applying new "DataOps" best practices to BI and create a true Data Application layer.
<div style="position: relative; padding-bottom: 62.5%; height: 0;"><iframe src="https://www.loom.com/share/c7000e042e134a96a501d825dd1eaea9" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;"></iframe></div>
Here are some ways you can apply DataOps principles to your visualizations:
Important visualizations and dashboards ought to be version controlled, just like the rest of the software stack. Changes should be tracked over time and attributable to individual authors.
Developers and users should feel free to make changes with confidence. Rolling back a broken change should be just as easy as (or easier than!) making the change in the first place.
Version history also helps identify who the experts are for a given dashboard and gives a signal on whether the resource is up-to-date.
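As a sketch of what this can look like in practice, suppose each dashboard is described in a plain-text spec that lives in git. The `DashboardSpec` structure and field names below are hypothetical, not any particular BI tool's format; the point is that a dashboard defined as data can be diffed, blamed, and reverted like any other code:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class DashboardSpec:
    """A dashboard described as data, so git can diff, blame, and revert it."""
    name: str
    source_table: str
    metrics: list = field(default_factory=list)
    filters: list = field(default_factory=list)

spec = DashboardSpec(
    name="weekly_revenue",
    source_table="analytics.orders",
    metrics=["sum(revenue)"],
    filters=["order_date >= current_date - 90"],
)

# Serialize deterministically (sorted keys) so diffs stay small and reviewable.
serialized = json.dumps(asdict(spec), indent=2, sort_keys=True)
print(serialized)
```

Committing that file gives you change tracking and attribution for free, and `git revert` becomes your rollback mechanism.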
Code review for business intelligence is important for the same reasons it is important for application code:
- Ensure that a change actually accomplishes its intent
- Get feedback on implementation, structure, and usability
- Use a second pair of eyes to catch bugs and unforeseen issues
- Spread knowledge so that the context of a change is not limited to a single person
There should be a fast and easy way to validate that a change to your data will not break your visualization layer, and vice versa.
Major issues should be surfaced to the author of a change before it gets in front of other users.
Unit tests also document the expectations of a system and clarify what is expected (or not expected) to stay the same over time.
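A minimal illustration, assuming the dashboard's filter logic has been factored out into a plain function (the `active_customers` filter below is invented for the example): once the logic is code, even edge cases like null values can be pinned down with ordinary unit tests.

```python
def active_customers(rows):
    """Filter the dashboard's input rows down to active, non-deleted customers.

    Treats a missing status (None) as inactive rather than raising an error,
    which is exactly the kind of edge case worth documenting in a test.
    """
    return [r for r in rows if r.get("status") == "active" and not r.get("deleted")]

# Tests that document what is expected to stay true over time.
def test_ignores_null_status():
    assert active_customers([{"id": 1, "status": None}]) == []

def test_keeps_active_rows():
    rows = [{"id": 2, "status": "active", "deleted": False}]
    assert active_customers(rows) == rows

test_ignores_null_status()
test_keeps_active_rows()
```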
Dashboards don’t live in isolation. They sit on top of a sometimes large and complex set of systems that populate data into the data warehouse.
When any part of this system changes, your BI tool should validate that all your tests pass and that your dashboards are valid.
Catching integration issues quickly makes it easy to identify the root cause before other changes are committed on top of it.
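One way to automate that validation, sketched here with a hard-coded schema snapshot standing in for a real `information_schema` query, and hypothetical dashboard dependency lists: on every pipeline change, verify that each dashboard only references columns that still exist.

```python
# Columns the warehouse currently exposes. In CI this would come from
# querying the warehouse's information schema; hard-coded for illustration.
warehouse_schema = {
    "analytics.orders": {"order_id", "order_date", "revenue"},
}

# Columns each dashboard depends on (hypothetical specs).
dashboard_deps = {
    "weekly_revenue": ("analytics.orders", {"order_date", "revenue"}),
    "refunds": ("analytics.orders", {"order_date", "refund_amount"}),  # stale!
}

def find_broken_dashboards(schema, deps):
    """Return dashboards that reference columns the warehouse no longer has."""
    broken = {}
    for dash, (table, cols) in deps.items():
        missing = cols - schema.get(table, set())
        if missing:
            broken[dash] = missing
    return broken

broken = find_broken_dashboards(warehouse_schema, dashboard_deps)
print(broken)  # a CI job would fail the build if this is non-empty
```

Running this on every upstream change points directly at the root cause (the dashboard and the missing column) before anything else is built on top of the breakage.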
You would never deploy a risky change to a user-facing application without trying it out first in a staging environment. And yet, this happens all the time with many business intelligence tools.
Changes to visualizations and dashboards should flow through a standard deployment lifecycle and then be shipped to production using a process that is fully under your control.
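Sketched as code, with in-memory environments and a made-up `deploy` helper standing in for a real BI deployment API, the lifecycle might look like this: the same validated artifact is tried in staging first, then promoted to production.

```python
# In-memory stand-ins for real environments; an actual pipeline would call
# the BI tool's deployment API instead.
environments = {"staging": {}, "production": {}}

def validate(spec):
    """Gate: refuse to ship a spec that is missing required fields."""
    return bool(spec.get("name")) and bool(spec.get("source_table"))

def deploy(spec, env):
    """Promote a dashboard spec into an environment, only if it validates."""
    if not validate(spec):
        raise ValueError(f"refusing to deploy invalid spec to {env}")
    environments[env][spec["name"]] = spec

spec = {"name": "weekly_revenue", "source_table": "analytics.orders"}

deploy(spec, "staging")     # try the change out first...
deploy(spec, "production")  # ...then promote the exact same artifact
```

The key property is that production only ever receives an artifact that has already passed validation and been exercised in staging.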
At Hashboard we see production-quality Data Applications as a missing and critical layer of the modern data stack. We have customers using Hashboard to build trust in their production dashboards and are starting to see how it changes how teams operate. If you want to play around with DataOps or have other ideas about how we should be thinking about code-based analytics, get in touch!