Job Description - Data Architecture Assistant Vice President (250001CO)
Job Description
Data Architecture Strategy:
· Develop and implement a robust data architecture strategy that aligns with business goals and supports long-term data needs.
· Ensure seamless integration across different data sources, systems, and platforms to enable a unified data ecosystem.
· Design data systems that are scalable, flexible, and capable of handling increasing data volumes and complexity.
· Lead the selection of appropriate data platforms, tools, and technologies to optimize performance, security, and cost-efficiency.
Data Modeling:
· Lead the design and implementation of scalable and efficient data models that support operational and analytical needs.
· Ensure data models are consistent and accurate across the organization, maintaining a single source of truth.
· Continuously refine and optimize data models to improve performance and adaptability to changing business requirements.
· Establish and enforce industry best practices for data modeling and ensure alignment with business processes.
Data Engineering & Infrastructure:
· Lead the creation and optimization of automated ETL (Extract, Transform, Load) pipelines to handle large-scale data processing.
· Enhance and scale the data infrastructure to support efficient data storage, retrieval, and processing at an enterprise level.
· Implement automated solutions to streamline data workflows and reduce manual intervention, increasing overall efficiency.
· Work closely with data scientists and other data consumers to ensure the infrastructure supports advanced analytics and machine learning models.
Innovation & Technology Adoption:
· Stay up-to-date with the latest trends in data architecture, engineering, and analytics to identify and implement innovative solutions.
· Research and integrate cutting-edge tools and platforms to enhance data management capabilities and streamline processes.
· Encourage a team culture focused on continuous improvement, supporting experimentation with new technologies and methodologies.
· Develop and execute a technology roadmap to ensure the team is using the best tools available to meet both current and future data needs.
Optimization & Performance Management:
· Implement and maintain continuous monitoring of data architecture and systems to ensure optimal performance and minimal downtime.
· Identify bottlenecks and inefficiencies in data pipelines and workflows, and implement solutions to optimize processing speed and resource usage.
· Manage data infrastructure cost-effectively by optimizing resource usage, reducing waste, and implementing more efficient data storage solutions.
· Ensure data systems and architectures are built to scale efficiently, accommodating growth in data volume, users, and complexity without compromising performance.
Qualifications
Educational Requirements: A Master's Degree in Data Engineering, Computer Engineering, Information Systems, or a related discipline is highly preferred. An advanced degree provides the deeper technical and strategic insight needed to drive complex data solutions and innovation at the organizational level.
Special Certification or Training Required:
· Certified Data Management Professional (CDMP) – preferable; demonstrates expertise in data architecture and data management practices.
· AWS Certified Solutions Architect – preferable; relevant to cloud-based data architecture and engineering.
Required Industry Experience:
· 8-12 years of experience in Data Architecture, Modeling, and Engineering, with a proven track record of managing complex data systems.
· Demonstrated expertise in managing data security and architecture within a complex organizational environment.
· Extensive hands-on experience in designing and implementing data architecture solutions that align with business needs and ensure scalability, security, and efficiency.
Technological Requirements:
· Expertise in data modeling tools (e.g., Erwin, IBM InfoSphere Data Architect, Microsoft Visio).
· Proficient in Data Vault modeling, dimensional modeling, and entity-relationship modeling.
· Hands-on experience with AWS (Amazon Web Services), Google Cloud Platform, or Microsoft Azure for cloud-based data architecture and engineering.
· Familiarity with cloud data warehouses like Snowflake or BigQuery.
· Proficient in both relational databases (e.g., Oracle, SQL Server, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
· Experience building data pipelines using Apache Spark, Apache Kafka, or Apache Airflow.
· Proficient in data security practices, utilizing tools like Data Loss Prevention (DLP), encryption, and identity & access management (IAM).
· Knowledge of compliance standards such as GDPR, CCPA, and HIPAA.
· Strong programming skills in SQL, Python, and Java.
· Experience with shell scripting and automation frameworks for data engineering tasks.
· Familiarity with Hadoop, Apache Spark, and other big data tools for processing large datasets.
· Proficient in designing and managing data lakes and data warehouses for big data environments.
Language Requirements: Fluent in English.