
Data Architect

  • Remote
  • Campinas, São Paulo, Brazil

Job description

We are seeking a highly skilled and experienced Platform Data Architect to build and maintain our next-generation data platform. This role requires a deep understanding of cloud-native technologies and a strong ability to architect solutions that support multiple products and teams within a complex, integrated environment. You will be a key player in defining and implementing our data strategy, ensuring data quality, scalability, and security across our entire platform. This is a hands-on role requiring collaboration with various engineering and product teams.

Responsibilities:

· Implement a scalable, reliable, and secure data platform architecture leveraging cloud-native services (GCP, Fivetran, Dataflow, Pub/Sub, BigQuery or equivalent).

· Define and implement data governance policies and procedures, including data quality, security, and access control (e.g., Data Catalog, Collibra, Alation, Dataplex).

· Optimize SQL queries and database performance across relational and NoSQL databases.

· Design and implement data warehousing/lakehouse solutions.

· Clearly articulate the rationale for chosen platforms and technologies to engineering and product stakeholders.

· Collaborate with product and engineering teams to gather requirements, translate them into technical specifications, and deliver solutions.

· Develop and maintain technical documentation for the data platform.

· Mentor and guide junior data engineers and architects.

· Participate in Agile software development processes.

Job requirements

· 5+ years of experience as a Data Architect, with a minimum of 3 years specifically focused on platform-level data architecture in a microservices or distributed-system environment.

· Extensive experience with cloud-native technologies, specifically GCP, AWS or Azure.

· Deep understanding and hands-on experience implementing data governance frameworks and using associated tools (e.g., Data Catalog, Collibra, Alation).

· Proven experience designing backend data solutions for high-performing RESTful APIs.

· Expert-level proficiency in SQL and database performance tuning across relational (PostgreSQL, MSSQL, MySQL) and NoSQL databases.

· Proficiency in at least two programming languages (Java, C#, Python preferred). Shell scripting experience is required.

· Experience with Big Data technologies and Massively Parallel Processing databases (BigQuery, Snowflake, Synapse).

· Experience with multi-tenanted data environments.

· Strong understanding of data modeling techniques (relational, dimensional, NoSQL). Experience with various data schemas (ontologies, taxonomies, DTDs).

· Experience with Agile methodologies (Scrum, Kanban).

· Excellent communication, collaboration, and documentation skills.

Bonus Points:

· Experience with data observability tools (e.g., Monte Carlo, DataDog).

· Experience with CI/CD pipelines for data infrastructure.

· Experience with data mesh principles.
