Senior Enterprise Analytics Platform Administrator

Lean It Inc., India
Remote
AI Summary

Lean It Inc. seeks a Senior Enterprise Analytics Platform Administrator to own our enterprise analytics ecosystem, managing Terraform deployments, Generative AI infrastructure, and enforcing robust security measures.

Key Highlights
Own enterprise analytics ecosystem
Manage Terraform deployments and Generative AI infrastructure
Enforce robust security measures
Key Responsibilities
Manage Databricks on AWS environment
Design infrastructure for Generative AI
Enforce reproducibility and system auditing
Lead FinOps reviews
Manage strategic vendor relationships
Drive Data Mesh culture and data privacy frameworks
Technical Skills Required
Terraform, Databricks, AWS, Generative AI, ELT, Fivetran, Sigma Computing, MLflow, GPU-optimized compute clusters, Role-Based Access Control, AWS networking, AWS security
Benefits & Perks
Fully remote work opportunity
Experience
8+ years
Nice to Have
Databricks Certified Data Engineer Professional
AWS Solutions Architect Professional
Generative AI concepts and modern ML techniques
Agile/scrum methodologies and lifecycle management tools

Job Description


Title: Senior Platform Administrator

Work Mode: Remote / Work From Home

Experience: 8+ years

Hire Mode: Contract

 

Role Summary

We are seeking a Senior Platform Administrator to own our enterprise analytics ecosystem. You will manage everything from Terraform deployments to our emerging GenAI infrastructure. Beyond technical execution, you will guard against engineering inertia, ensuring that platform investments are justified by objective future value rather than past expenditure. This is a fully remote opportunity open to candidates across India.


Principal Duties and Responsibilities

  • Architect and maintain a multi-region Databricks on AWS environment, enforcing "secure by default" networking via customer-managed VPCs, PrivateLink, Transit Gateway, and strict IAM cross-account roles.
  • Design the underlying infrastructure for Generative AI, including configuring MLflow for model governance, Databricks Vector Search for RAG applications, and GPU-optimized compute clusters for inference (an MLflow registration sketch follows this list).
  • Enforce reproducibility by managing all platform resources (workspaces, clusters, jobs) via Terraform and Databricks Asset Bundles (DAB), ensuring no manual changes exist in Production.
  • Systematically audit legacy technical choices with a "clean slate" perspective. You are expected to seek disconfirming evidence that contradicts the status quo to prevent confirmation bias and the escalation of commitment to outdated strategies.
  • Lead monthly FinOps reviews using AWS Cost Explorer and Databricks system tables to differentiate between necessary investment and sunk costs, ensuring we do not irrationally commit resources to failing projects (a usage-query sketch follows this list).
  • Own strategic vendor relationships (Databricks, AWS, Fivetran, Sigma), holding partners accountable for successful outcomes and resolving support blockers aggressively rather than passively accepting roadmap delays.
  • Drive a Data Mesh culture by treating data assets as Data Products. Define clear contracts, SLOs (Service Level Objectives), and publication standards to decouple producers from consumers.
  • Design and operationalize data privacy frameworks to satisfy GDPR, CCPA, the DPDP Act (India), and SOC 2 requirements, including automated workflows for Right to Be Forgotten (RTBF) requests and PII masking within the Lakehouse architecture (a dynamic-view sketch follows this list).
  • Administer the Sigma Computing environment, overseeing workspace architecture, version tagging strategies, and the promotion path of analytics assets from Dev to Production.
  • Design and enforce comprehensive Role-Based Access Control (RBAC) policies across Unity Catalog (Catalogs, Schemas, Tables) and AWS IAM, ensuring least privilege access while maintaining operational velocity.
  • Manage high-volume data replication pipelines using Fivetran and AWS DMS, ensuring data fidelity, efficient schema drift handling, and cost-optimized sync frequencies across heterogeneous sources.
  • Perform rigorous code reviews on high-impact pipelines, enforcing best practices such as Z-ordering, Liquid Clustering, and Schema Evolution, with the authority to reject sub-standard code.
  • Partner with analytics, data science, and business teams to design scalable data solutions that align to strategic priorities and unlock advanced use cases.
  • Drive innovation by introducing emerging technologies, frameworks, and best practices into engineering workflows to improve scalability, automation, and productivity.
  • Collaborate effectively in a fully remote setting, maintaining clear asynchronous communication, documentation standards, and virtual team engagement across time zones.
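
For illustration, a minimal sketch of the MLflow model-governance flow referenced above, assuming the Unity Catalog model registry; the catalog, schema, and model names are illustrative, not our actual setup:

    import mlflow
    from mlflow import MlflowClient

    # Point the MLflow registry at Unity Catalog rather than the
    # legacy workspace registry (Databricks-specific registry URI).
    mlflow.set_registry_uri("databricks-uc")

    client = MlflowClient()

    # Assume a training run has already logged a model artifact.
    run_id = "..."  # hypothetical ID of a completed MLflow run
    mv = mlflow.register_model(
        model_uri=f"runs:/{run_id}/model",
        name="analytics.ml_models.churn_classifier",  # illustrative 3-level name
    )

    # Promote via alias so serving endpoints can pin "champion"
    # while new versions roll forward underneath.
    client.set_registered_model_alias(
        name="analytics.ml_models.churn_classifier",
        alias="champion",
        version=mv.version,
    )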

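A minimal sketch of the FinOps pull from Databricks system tables, meant to run in a Databricks notebook where spark is provided; system table schemas evolve, so verify the system.billing.usage column names against your account:

    # Aggregate the last 30 days of DBU consumption by workspace and SKU.
    monthly_dbus = spark.sql("""
        SELECT workspace_id,
               sku_name,
               SUM(usage_quantity) AS dbus
        FROM system.billing.usage
        WHERE usage_date >= DATE_SUB(CURRENT_DATE(), 30)
        GROUP BY workspace_id, sku_name
        ORDER BY dbus DESC
    """)
    monthly_dbus.show(truncate=False)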

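Similarly, a minimal sketch of the dynamic-view PII masking pattern, again runnable from a Databricks notebook; is_account_group_member() is standard Databricks SQL, while the object and group names are illustrative:

    # Expose email only to members of a privileged group; everyone
    # else sees a redacted placeholder.
    spark.sql("""
        CREATE OR REPLACE VIEW analytics.gold.customers_masked AS
        SELECT
          customer_id,
          CASE
            WHEN is_account_group_member('pii_readers') THEN email
            ELSE '***REDACTED***'
          END AS email,
          country
        FROM analytics.gold.customers
    """)
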
Qualifications and Key Skills

  • 8+ years in Data Engineering/Architecture, with 3+ years of daily, hands-on production experience running Databricks on AWS.
  • Deep familiarity with global and Indian data compliance standards (GDPR, CCPA, PDPB/DPDP Act) and experience implementing technical controls for PII protection (e.g., dynamic views, column-level encryption) in a distributed data environment.
  • Expert-level proficiency with modern ELT and BI tools, specifically Fivetran (connector configuration, transformation), AWS DMS, and Sigma Computing (administration, version control, security).
  • Deep understanding of Role-Based Access Control (RBAC) models within distributed data systems, specifically regarding Unity Catalog grants and AWS IAM Identity Center (a grants sketch follows this list).
  • Deep expertise in AWS networking (Transit Gateway, VPC Endpoints, private subnets) and security (SCPs, encryption, IAM Identity Center).
  • Proven production experience deploying CI/CD pipelines via GitHub Actions and Databricks Asset Bundles (DAB).
  • Demonstrated experience managing enterprise software vendors, including contract utilization, technical escalation, and roadmap alignment.
  • A track record of making high-stakes technical decisions, including examples of when you recommended stopping a project to save resources (managing sunk-cost bias).
  • Databricks Certified Data Engineer Professional (strongly preferred) or AWS Solutions Architect Professional.
  • Familiarity with Generative AI concepts and modern ML techniques (LLMs, transformers, ensemble methods, deep learning frameworks) and how they integrate into data engineering workflows.
  • Strong analytical problem-solving mindset, capable of identifying patterns, optimizing processes, and unlocking new business value through scalable solutions.
  • Demonstrated ability to navigate ambiguity—driving clarity, alignment, and momentum in evolving requirements and complex stakeholder landscapes.
  • Working knowledge of agile/scrum methodologies and use of agile lifecycle management tools (e.g., Jira, Confluence).
  • Exceptional communication and leadership skills to mentor engineers, influence stakeholders, and drive adoption of data products and practices.
  • Comfort working in a fully remote environment with self-discipline, strong documentation habits, and effective virtual collaboration skills.
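
For context on the RBAC expectations above, a minimal sketch of least-privilege Unity Catalog grants for a read-only analyst group; object and group names are illustrative, and in practice these grants would be managed through Terraform per the duties above rather than run by hand:

    # Run from a Databricks notebook where spark is provided.
    for stmt in [
        "GRANT USE CATALOG ON CATALOG analytics TO `analysts`",
        "GRANT USE SCHEMA ON SCHEMA analytics.gold TO `analysts`",
        "GRANT SELECT ON SCHEMA analytics.gold TO `analysts`",
    ]:
        spark.sql(stmt)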

 

If interested, send your resume to sadiya.mankar@leanitcorp.com.

 

