Proximity to data is the cheat code for data teams
Why organizations should keep data work embedded within product teams rather than centralizing it.
If you avoid centralizing during growth, data mesh becomes a natural evolution rather than a painful, and sometimes impossible, retrofit.
The Problem with Centralization
Early-stage teams move quickly because product engineers handle both features and metrics: the people shipping the product are the same people measuring it.
Growth adds teams, handoffs, and process, and a central data group starts to look like the efficient answer. In practice, the distance it creates becomes a bottleneck: requests queue behind one team, shadow copies of data proliferate, and decision-making slows down.
Data Mesh as Solution
Rather than retrofitting data mesh after centralizing, organizations should build toward it proactively by:
- Constructing a self-serve platform with ingestion and transformation capabilities
- Publishing shared definitions and governance layers (see the sketch after this list)
- Embedding data talent within product teams
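
To make the shared-definitions layer concrete, here is a minimal sketch of a registry an embedded analyst could publish to through a self-serve platform. Everything in it (MetricDefinition, MetricRegistry, the weekly_active_users metric, the team names) is a hypothetical illustration under those assumptions, not any particular tool's API.

```python
# Minimal sketch of a shared definition published through a self-serve
# platform layer. MetricDefinition and MetricRegistry are hypothetical
# names for illustration, not a specific tool's API.
from dataclasses import dataclass


@dataclass(frozen=True)
class MetricDefinition:
    """A single definition that every domain team reuses."""
    name: str
    sql: str          # canonical transformation logic
    owner: str        # the product team accountable for the metric
    description: str = ""


class MetricRegistry:
    """Governance layer: one place to publish and look up definitions."""

    def __init__(self) -> None:
        self._metrics: dict[str, MetricDefinition] = {}

    def publish(self, metric: MetricDefinition) -> None:
        # Reject silent redefinitions so "weekly active users" means one
        # thing across the whole company.
        if metric.name in self._metrics:
            existing = self._metrics[metric.name]
            raise ValueError(f"{metric.name} is already owned by {existing.owner}")
        self._metrics[metric.name] = metric

    def get(self, name: str) -> MetricDefinition:
        return self._metrics[name]


# An embedded analyst on the growth team publishes the definition directly,
# instead of filing a ticket with a central data group.
registry = MetricRegistry()
registry.publish(MetricDefinition(
    name="weekly_active_users",
    sql="SELECT COUNT(DISTINCT user_id) FROM events "
        "WHERE event_ts >= now() - interval '7 days'",
    owner="growth-team",
    description="Distinct users with at least one event in the past 7 days.",
))
print(registry.get("weekly_active_users").owner)  # -> growth-team
```

Because each definition carries an owner, governance questions ("who changed weekly_active_users?") route to the accountable product team rather than into a central backlog.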
AI Raises the Stakes
A critical finding: two-thirds of data leaders lack confidence in the quality of the data powering their AI, even as 65% of organizations now use generative AI regularly.
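
In practice, "confidence in data quality" often comes down to explicit checks that run before data reaches an AI pipeline. The sketch below shows the idea; the thresholds and column names (updated_at, user_id) are illustrative assumptions, and it relies on pandas rather than any particular quality framework.

```python
# A minimal sketch of a data-quality gate for a table feeding an AI pipeline.
# Thresholds and column names are illustrative assumptions, not prescriptions.
import pandas as pd


def check_feature_table(df: pd.DataFrame) -> list[str]:
    """Return human-readable failures; an empty list means the table passes."""
    failures = []

    # Freshness: the newest record should be recent enough for the use case.
    last_update = pd.to_datetime(df["updated_at"], utc=True).max()
    if pd.Timestamp.now(tz="utc") - last_update > pd.Timedelta(hours=24):
        failures.append(f"stale data: last update was {last_update}")

    # Completeness: key columns should be almost fully populated.
    null_rate = df["user_id"].isna().mean()
    if null_rate > 0.01:
        failures.append(f"user_id null rate {null_rate:.1%} exceeds 1%")

    # Uniqueness: duplicate keys silently skew anything trained on this table.
    if df["user_id"].duplicated().any():
        failures.append("duplicate user_id values found")

    return failures


if __name__ == "__main__":
    sample = pd.DataFrame({
        "user_id": [1, 2, 2, None],
        "updated_at": ["2024-01-01", "2024-01-01", "2024-01-02", "2024-01-02"],
    })
    for problem in check_feature_table(sample):
        print("FAIL:", problem)
```

Gating pipelines on checks like these is what turns "we think the data is fine" into something a data leader can actually stand behind.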
Failure Patterns
I’ve identified five common mesh implementation failures:
- Governance gaps halting rollouts mid-stream
- Insufficient domain-level talent density
- Attempting adoption across the whole organization at once instead of starting with a few domains
- Central teams acting as proxies rather than enablers
- Missing business ownership and accountability
Conclusion
Growing companies should resist the pull to centralize analytics into a single department. Keeping data roles embedded in product teams prevents years of rework down the line and gives AI initiatives the well-owned, high-quality data they need to succeed.