Why a Systemic Approach to Engineering Performance?

In the fast-paced world of software development, the topic of “engineering performance” often sparks a lively debate. How do you measure something as complex and nuanced as the output of a creative, problem-solving human being? It’s a question that keeps leaders and teams up at night, and frankly, there’s no single silver bullet. Every company, shaped by its unique context, culture, and challenges, grapples with finding its own answer.
At inDrive, a global ride-hailing company known for its unique peer-to-peer pricing model, we’ve learned this firsthand. With over 600 engineers across more than 70 teams in 48 countries, our rapid growth presented a clear challenge: how do we ensure predictable outcomes and maintain efficiency as we scale, without simply hiring our way out of every problem? We needed a more sophisticated approach, one that moves beyond anecdotal evidence to embrace data-driven insights.
This isn’t just about chasing numbers; for us, Performance is a core company value, alongside Purpose and People. It’s deeply embedded in who we are and how we operate. So, when we set out to understand and improve our engineering impact, we didn’t just look for metrics; we sought to build a comprehensive system that truly reflects our commitment to continuous improvement at every level.
The market has fundamentally shifted. Companies today demand greater control and better efficiency from their existing resources – their processes, their people, and their tools. This becomes critically important when you’re scaling at the pace inDrive has experienced since 2020, where simply adding more headcount is no longer a sustainable growth strategy. We recognized several key factors driving our need for a new approach:
From Growth to Predictability: The Scaling Imperative
Our journey since 2020 has been one of explosive growth, both in business reach and the sheer size of our engineering division. This kind of scale, while exciting, quickly exposes the need for robust processes and reliable tools. We needed a way to ensure that our expanding operations could consistently deliver predictable outcomes, rather than becoming a source of unmanaged complexity.
Furthermore, leadership needed reliable, data-driven insights to truly understand how teams were performing within these evolving processes. Identifying bottlenecks, celebrating successes, and making informed managerial decisions required more than gut feelings; it demanded concrete data. Crucially, any metrics we developed had to be deeply aligned with our strategic and operational goals. Only then could they transform from isolated statistics into genuine drivers of meaningful progress.
Performance as an Integrated System: More Than Just Metrics
When we talk about “performance,” it’s easy to conflate it with “productivity.” At inDrive, we see them as closely connected but distinct. Productivity, in our view, reflects the effectiveness of your delivery processes – how well you’re building things. Performance, however, measures the actual outcomes those processes produce – the impact of what you’ve built. This distinction is vital because it moves us beyond just measuring output to understanding true value creation.
Instead of focusing solely on individual productivity or tracking metrics only at the team level, we believe this complex challenge demands a comprehensive, integrated system. This system operates across all organizational levels, providing a holistic view of our engineering health and impact. We built our approach on three core pillars: structured metrics, clearly defined roles, and robust feedback loops, all supported by a unified toolset.
The Tech Metrics Tree: Cascading Goals and Insights
At the heart of our system is the “tech metrics tree,” a structured framework that organizes our metrics across five key domains: Cost Efficiency, People, Performance, Operational Excellence, and Engineering Excellence. These domains aren’t just arbitrary categories; they correspond directly to our strategic priorities.
What makes this tree particularly powerful is its tiered structure, mirroring our organizational hierarchy. From the overarching Division level (our entire tech division) down to Clusters (teams sharing product or domain expertise), individual Teams, and even the Individual Contributor level, metrics cascade, providing relevant insights at each stage. For instance, within a Cluster or Team, “Performance” and “Operational Efficiency” are further refined into areas like predictability, speed, maturity, and quality.
Imagine a metric like “Time to Market.” At the Division level, it might be a high-level strategic indicator. As you move down to a Cluster, it refines into how quickly that group delivers key features. At the Team level, it becomes even more granular, reflecting the cycle time for specific changes. This ensures that every metric, from the top down, contributes to a coherent understanding of our overall performance.
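The cascading idea above can be sketched as a small tree of metric nodes, each refining its parent for a narrower organizational scope. This is an illustrative toy, not inDrive's actual implementation; the class and metric names are hypothetical.

```python
from dataclasses import dataclass, field

# Toy sketch of a cascading metrics tree (hypothetical names, not
# inDrive's real system). Each node refines a parent metric for a
# narrower organizational scope: division -> cluster -> team.

@dataclass
class MetricNode:
    name: str    # e.g. "Time to Market"
    level: str   # "division" | "cluster" | "team"
    children: list["MetricNode"] = field(default_factory=list)

    def refine(self, name: str, level: str) -> "MetricNode":
        """Attach a more granular child metric and return it."""
        child = MetricNode(name, level)
        self.children.append(child)
        return child

    def walk(self, depth: int = 0):
        """Yield (depth, node) pairs, top-down."""
        yield depth, self
        for child in self.children:
            yield from child.walk(depth + 1)

# "Time to Market" cascading down the hierarchy, as in the example above.
ttm = MetricNode("Time to Market", "division")
cluster = ttm.refine("Key-feature delivery time", "cluster")
cluster.refine("Change cycle time", "team")

for depth, node in ttm.walk():
    print("  " * depth + f"{node.level}: {node.name}")
```

The point of the structure is that any team-level number can be traced upward to the strategic indicator it feeds.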
Empowering Ownership: Roles and Accountability
Metrics alone are just data points; they gain power through human ownership and accountability. In our system, each metric has a Subject Matter Expert (SME) who defines its methodology and champions its implementation. But critically, managers are directly accountable for performance within their respective teams, clusters, or the entire division.
This isn’t a task delegated to specialized roles like Agile Coaches or Project Managers. It’s an integral part of a manager’s daily work. We believe that only by embedding metrics into the core responsibilities of leadership can we achieve truly systemic and sustainable results. This approach fosters a culture where data isn’t just reported; it’s actively managed and acted upon.
Closing the Loop: Data-Driven Feedback for Continuous Improvement
The system wouldn’t be complete without robust feedback loops. These mechanisms allow teams at every level to systematically analyze their current state, make data-driven management decisions, and drive change. Whether it’s through annual strategy adjustments, technology programs, or joint initiatives, our metrics provide the signals needed to course-correct and optimize.
A great example of this in action is our engineering satisfaction survey. Launched a few years ago in response to signals about delivery performance issues, it’s now a standalone metric and a crucial source of insight. It helps us identify and drive improvements in internal processes and tools, particularly those managed by our platform teams. This shows how a metric can evolve from a simple data point into a catalyst for significant organizational change.
The Single Analytical System: A Unified Command Center
To underpin this entire ecosystem, we built the Single Analytical System. This Tableau-based dashboard environment connects all our metrics and data sources into that unified tech metrics tree, serving as the single source of truth for engineering performance across all teams, clusters, and the entire division. Think of it as our engineering command center, providing a holistic, real-time pulse check.
The concept is simple yet powerful: a set of dashboards that consolidate metrics from diverse sources—Jira, Grafana, Kibana, PagerDuty, HR systems, and various in-house tools—into a digestible, single-pager view. It gives you a pocket-sized overview of the entire engineering landscape. When a deeper dive is needed, you can jump directly to the underlying data source, like a detailed dashboard in Grafana.
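In spirit, the consolidation works like a registry of source adapters, each contributing one summary number to a flat snapshot. The sketch below is purely illustrative: the adapter functions are stubs standing in for real API queries to systems like Jira or PagerDuty, and all names are invented.

```python
# Hedged sketch of the "single-pager" consolidation idea. The adapter
# functions are stubs; in a real system each would query its source
# (Jira, Grafana, PagerDuty, ...) through that source's API.

def jira_open_bugs() -> int:
    return 12  # stub standing in for a Jira API query

def pagerduty_incidents_this_month() -> int:
    return 3   # stub standing in for a PagerDuty API query

# Registry of sources feeding the consolidated view.
SOURCES = {
    "open_bugs": jira_open_bugs,
    "incidents": pagerduty_incidents_this_month,
}

def single_pager() -> dict[str, int]:
    """One consolidated snapshot across all registered sources."""
    return {name: fetch() for name, fetch in SOURCES.items()}

print(single_pager())  # {'open_bugs': 12, 'incidents': 3}
```

Adding a new data source then means registering one more adapter, while the dashboard layer stays unchanged.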
Diving Deep: Understanding Each Tier’s Role
I designed the Single Analytical System with a five-tier structure, from the divisional apex down to a dedicated sandbox for deep analysis. Each tier offers unique insights tailored to specific organizational entities or provides a cross-cutting view of particular metrics.
1. Division Level
This is the strategic overview for our CTO and divisional leadership. It encompasses key technology metrics aligned with company and divisional strategy across our five core domains: Cost Efficiency (e.g., cost per ride), People (e.g., turnover, engagement), Operational Excellence (e.g., mobile performance, availability), Engineering Excellence (e.g., DORA metrics, technical debt), and Performance (e.g., time to market, lead time).
All metrics are displayed dynamically, tracking trends over time and signaling whether targets are being met. This dashboard empowers leadership to assess overall efficiency, pinpoint focus areas, and understand the influence of specific clusters on our strategic goals.
2. Cluster Level
Designed for Directors of Engineering/Product, this dashboard includes all cluster metrics organized into more specific areas: Predictability (e.g., goals progress, scope drop), Speed (e.g., lead time, velocity), Quality (e.g., incidents, SLA postmortems), Maturity (e.g., team maturity index), and Engineering Excellence (e.g., DORA metrics like cycle time, deployment frequency, change failure rate, mean time to restore).
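Two of the DORA metrics mentioned here are simple ratios over deployment records, which a short sketch can make concrete. The record shape and field names below are hypothetical, not a real inDrive schema.

```python
from datetime import date

# Hedged sketch: computing two DORA metrics (deployment frequency and
# change failure rate) from a list of deployment records. The record
# shape is invented for illustration.

deployments = [
    {"day": date(2024, 5, 1), "failed": False},
    {"day": date(2024, 5, 2), "failed": True},
    {"day": date(2024, 5, 2), "failed": False},
    {"day": date(2024, 5, 8), "failed": False},
]

def deployment_frequency(records, window_days: int) -> float:
    """Average deployments per day over the reporting window."""
    return len(records) / window_days

def change_failure_rate(records) -> float:
    """Share of deployments that caused a failure in production."""
    failures = sum(1 for r in records if r["failed"])
    return failures / len(records)

print(f"Deploys/day: {deployment_frequency(deployments, 7):.2f}")  # 0.57
print(f"CFR: {change_failure_rate(deployments):.0%}")              # 25%
```

Cycle time and mean time to restore follow the same pattern, as averages over timestamp deltas rather than counts.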
These insights are typically analyzed monthly during cluster metric reviews and are crucial for improving performance and informing company-wide processes like annual performance reviews.
3. Team Level
Mirroring the cluster level but with a team-specific context, this dashboard is the primary management tool for Engineering Managers. It covers Predictability (e.g., sprint goal success), Speed, Quality, Maturity, and Engineering Excellence metrics tailored to individual teams.
Crucially, I designed these dashboards to be flexible, working seamlessly whether a team uses Scrum or Kanban, ensuring consistent evaluation across the board. EMs leverage this system for data-driven planning, stakeholder alignment, and continuous improvement during daily operations, sprint planning, and retrospectives.
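A predictability signal such as sprint goal success reduces to a simple ratio over recent iterations, which is also why it can stay methodology-agnostic. The data shape here is a made-up illustration, not the actual dashboard's model.

```python
# Hedged sketch: sprint goal success as a share of recent iterations
# where the goal was met. The history format is hypothetical; for a
# Kanban team, "goal_met" could be replaced by a per-period commitment
# check without changing the calculation.

sprints = [
    {"goal_met": True},
    {"goal_met": False},
    {"goal_met": True},
    {"goal_met": True},
]

def sprint_goal_success(history) -> float:
    """Fraction of iterations in which the sprint goal was achieved."""
    return sum(s["goal_met"] for s in history) / len(history)

print(f"Sprint goal success: {sprint_goal_success(sprints):.0%}")
```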
4. Individual Contributor Level
This dashboard provides engineer-level productivity data across five key areas: collaboration, work quality, workload health, development experience, and even AI adoption. It’s a tool for Engineering Managers in their day-to-day work, helping them maintain high productivity levels, identify individual growth areas, and foster professional development.
5. Sandbox Level
The Sandbox is where our SMEs and managers can deep-dive. It contains specialized dashboards for managing specific metrics across the organization, enabling advanced analysis and experimentation. For example, an SME focusing on “Time to Market” or “team maturity” can use this level for in-depth investigations and ad-hoc analysis.
Beyond the Numbers: Metrics as Signals, Not Goals
Building a robust system for managing, evaluating, and improving engineering performance is undeniably crucial. It provides a data-driven understanding of our current state and empowers us to launch targeted improvement initiatives at multiple levels within the organization.
However, we are acutely aware of the inherent risks of over-reliance on metrics. They can be misinterpreted, become a target to be “gamed,” or miss critical context. As part of our Engineering Excellence Team, I actively promote the mindset that metrics are not the goal themselves; they are powerful signals designed to guide management decisions. A metric might reflect short-term fluctuations, or even mislead without proper, contextual analysis.
Their real value lies in enabling comprehensive, thoughtful evaluation. They empower managers to ask the right questions during planning, performance reviews, and daily discussions, ultimately fostering a data-driven culture grounded in accountability, continuous learning, and a true understanding of impact. It’s about leveraging data to build better, faster, and smarter, ensuring our engineering efforts consistently align with our purpose.
To learn more about our analytical system, career model, and engineering practices, explore our Public Engineering Handbook.




