SCHIPHOL DATA PLATFORM
Scaling Databricks and AI adoption with Terraform inner sourcing across Schiphol's Data, AI & Analytics organisation.
2026
Cloud Data Platform Engineering
Active
Within Schiphol's Data & Analytics division, Databricks serves as the core technology for managing large-scale data. A Data Product Infrastructure (DPI) had been established using Terraform, but adoption across Data Product teams remained limited. NexOps was brought in as a kwartiermaker (a Dutch term for a pioneering lead who establishes a new capability) to independently drive DPI adoption within and beyond the Data, AI & Analytics organisation: expanding the infrastructure, enabling inner sourcing of Terraform code, and making the platform accessible for broader use across Schiphol. Beyond the core DPI work, NexOps also drove platform-wide automation improvements and built automated security assessment workflows to ensure compliance across all data products.
METHODOLOGY
- 01
Gathered requirements from multiple Data Product teams and documented user stories on the backlog in collaboration with the Product Owner of Platform Technology & IoT
- 02
Developed an integrated implementation plan covering DPI expansion, inner sourcing rollout, and team onboarding across four phases
- 03
Built new Terraform modules and implemented Databricks Asset Bundles (DABs) as the declarative automation standard across the department — enabling teams to define jobs, pipelines, and ML workflows as code with consistent, reproducible deployments
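The declarative standard described above centres on a bundle definition checked into each product repository. A minimal sketch of such a `databricks.yml` follows; the bundle name, job, cluster sizing, and workspace host are all illustrative values, not taken from the actual Schiphol setup:

```yaml
# Hypothetical bundle definition; every name and value here is illustrative.
bundle:
  name: dpi-example-pipeline

resources:
  jobs:
    nightly_ingest:
      name: nightly-ingest
      tasks:
        - task_key: ingest
          notebook_task:
            notebook_path: ./notebooks/ingest.py
          # Cluster sizing is an assumption; real values come from team policies.
          new_cluster:
            spark_version: 13.3.x-scala2.12
            node_type_id: Standard_DS3_v2
            num_workers: 2

targets:
  dev:
    workspace:
      host: https://adb-example.azuredatabricks.net
```

A team would then validate and deploy with `databricks bundle validate` and `databricks bundle deploy -t dev`, giving the reproducible, version-controlled deployments described above.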
- 04
Rolled out DABs adoption across multiple Data Product teams — providing templates, documentation, and hands-on onboarding to replace ad-hoc deployment methods with governed declarative pipelines
- 05
Enabled AI and ML workloads on Databricks by configuring GPU clusters, MLflow experiment tracking, and the Feature Store for data science teams across the organisation
- 06
Established inner sourcing of Terraform scripts — making infrastructure code available in the Schiphol code repository for self-service adoption by teams outside Data & Analytics
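Inner-sourced modules of this kind are typically consumed directly from the shared repository as versioned Terraform module sources. A hedged sketch, assuming a hypothetical internal Git URL, module path, and variable interface:

```hcl
# Illustrative only: the repository URL, module path, and variables are assumptions.
module "data_product_workspace" {
  source = "git::https://git.example-schiphol.internal/dpi/terraform-modules.git//databricks/workspace?ref=v1.4.0"

  product_name = "flight-telemetry"
  environment  = "dev"
}
```

Pinning `ref` to a tag lets consuming teams upgrade deliberately, which is what makes self-service adoption safe at scale.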
- 07
Co-implemented the Databricks Security Analysis Tool (SAT) across the platform, providing automated security posture assessments, misconfiguration detection, and best-practice compliance reporting for all Databricks workspaces
- 08
Automated security assessment workflows — including SAT-driven compliance checks, vulnerability scanning, and policy-as-code validations — integrated into CI/CD pipelines to enforce security standards across all data products
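A policy-as-code validation of the kind described here can be as simple as a pure function run in CI against rendered cluster configurations. A minimal sketch, in which the rule set, tag names, and config shape are illustrative assumptions rather than Schiphol's actual policies:

```python
"""Policy-as-code sketch: validate a Databricks-style cluster config in CI.

The rules, tag names, and config shape below are illustrative assumptions."""

REQUIRED_TAGS = {"cost_center", "data_product"}
MAX_AUTOTERMINATION_MINUTES = 60


def validate_cluster(config: dict) -> list[str]:
    """Return a list of policy violations; an empty list means compliant."""
    violations = []
    minutes = config.get("autotermination_minutes", 0)
    if minutes == 0 or minutes > MAX_AUTOTERMINATION_MINUTES:
        violations.append("autotermination must be set and <= 60 minutes")
    missing = REQUIRED_TAGS - set(config.get("custom_tags", {}))
    if missing:
        violations.append(f"missing required tags: {sorted(missing)}")
    if config.get("enable_public_ip", False):
        violations.append("public IPs are not allowed")
    return violations


# Example run: this config trips all three rules.
bad = {
    "autotermination_minutes": 0,
    "custom_tags": {"cost_center": "DA-01"},
    "enable_public_ip": True,
}
print(validate_cluster(bad))
```

Wired into a CI/CD pipeline, a non-empty return value fails the build, which is what enforces the standard before anything reaches a workspace.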
- 09
Owned monitoring and maintenance of all Databricks workspaces across Schiphol — including cluster health checks, job failure alerting, workspace configuration audits, cost tracking, and proactive capacity management to ensure platform stability and performance
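The job-failure alerting part of this can be sketched as a pass over job run records. The dict shape below loosely mirrors a Databricks Jobs "list runs" response, but the field names, timestamps, and job names are illustrative assumptions:

```python
"""Job-failure alerting sketch over Databricks-style job run records.

Run records are plain dicts shaped loosely like a Jobs API runs listing;
the field names and sample data are illustrative assumptions."""


def failed_runs(runs: list[dict], since_ms: int) -> list[str]:
    """Return the distinct names of runs since `since_ms` that ended FAILED."""
    alerts = []
    for run in runs:
        state = run.get("state", {})
        if (run.get("start_time", 0) >= since_ms
                and state.get("result_state") == "FAILED"):
            alerts.append(run.get("run_name", "<unknown>"))
    return sorted(set(alerts))


runs = [
    {"run_name": "nightly-ingest", "start_time": 1_700_000_100_000,
     "state": {"result_state": "FAILED"}},
    {"run_name": "dq-checks", "start_time": 1_700_000_200_000,
     "state": {"result_state": "SUCCESS"}},
]
print(failed_runs(runs, since_ms=1_700_000_000_000))  # ['nightly-ingest']
```

In practice the output list would feed an alerting channel; cost tracking and capacity checks follow the same pattern of periodic scans over workspace APIs.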
- 10
Built further infrastructure automations for environment provisioning, configuration drift detection, and automated remediation to reduce manual operational overhead
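At its core, configuration drift detection is a diff between desired state and observed state. A minimal sketch over flat dicts; real workspace settings are nested and would be fetched via the Terraform provider or an SDK, neither of which is shown here:

```python
"""Configuration drift detection sketch: diff desired vs. actual settings.

Both configs are flat dicts for illustration; the keys and values are
assumptions, not real Schiphol workspace settings."""


def detect_drift(desired: dict, actual: dict) -> dict:
    """Map each drifted key to a (desired, actual) pair; missing keys count too."""
    drift = {}
    for key in desired.keys() | actual.keys():
        want, have = desired.get(key), actual.get(key)
        if want != have:
            drift[key] = (want, have)
    return drift


desired = {"ip_access_lists": "enabled", "max_clusters_per_user": 5}
actual = {"ip_access_lists": "disabled", "max_clusters_per_user": 5,
          "legacy_acl": True}
print(detect_drift(desired, actual))
```

Automated remediation then becomes a loop over the returned pairs, re-applying the desired value or re-running the relevant Terraform plan.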
- 11
Coordinated with the Technical Solution Architect and DevOps Automation team to ensure alignment on infrastructure standards, security baselines, and deployment patterns
OUTCOMES
- →
Data Product and AI teams actively adopting the Data Product Infrastructure for their products, with clear onboarding paths and documentation
- →
Databricks Asset Bundles (DABs) established as one of the standard deployment models — teams deploying jobs and pipelines declaratively with full version control and CI/CD integration
- →
Databricks AI/ML workspaces operational — enabling data science teams to develop and deploy models with governed infrastructure
- →
Terraform source code available in the Schiphol repository — enabling inner sourcing and self-service infrastructure provisioning across the organisation
- →
Databricks Security Analysis Tool (SAT) providing continuous security posture visibility and automated compliance reporting across all workspaces
- →
Automated security assessments reducing manual compliance review effort and ensuring consistent policy enforcement across data products
- →
All Databricks workspaces continuously monitored and maintained — with proactive alerting, regular health checks, and zero unplanned platform downtime
- →
Comprehensive documentation and process definitions published in Confluence, ensuring knowledge transfer and long-term sustainability
- →
Bi-weekly progress updates delivered to the Product Owner, with measurable adoption metrics tracked across teams
NEED SIMILAR RESULTS?
We deliver production-grade data platforms and AI solutions for enterprise clients. Tell us about your challenge.