About us
ALGOTEQUE is an IT consultancy firm that helps startups, mid-sized companies, and large corporations create and deliver innovative technologies.
Our team has a successful track record in designing, developing, implementing, and integrating software solutions (AI, ML, BI, Web, Automation) for the Telecom, Energy, Banking, Insurance, Pharma, Automotive, Industrial, and e-commerce sectors. We deliver our services in both fixed-price and time-and-materials models, helping our customers achieve their business and IT strategies.
Job Description
We are looking for a talented and experienced ETL Data Engineer specializing in Matillion, with a strong background in AWS services, DevSecOps, and data engineering. If you are passionate about building and maintaining robust data pipelines, collaborating with cross-functional teams, and ensuring data quality and integrity, we want to hear from you!
Key Responsibilities:
🛠 Design, Develop, and Maintain Data Pipelines: Extract data from various sources and populate data lakes and data warehouses.
🔄 Data Transformation: Develop and implement data transformation rules to make data understandable and actionable.
🤝 Collaborate: Work closely with Product Analysts, Data Scientists, and Engineers to identify and transform data.
✅ Data Governance: Implement data quality checks and maintain data catalogs in collaboration with the data governance team.
📊 Orchestration & Monitoring: Utilize orchestration, logging, and monitoring tools to build resilient pipelines.
🧪 Test-Driven Development: Apply TDD methodologies when building ELT/ETL pipelines.
📈 Data Analysis: Analyze data to ensure its accuracy and relevance.
📝 Version Control: Use Git for version control, with a working understanding of common branching strategies.
🚀 Agile Teamwork: Operate as part of an agile team, contributing to iterative development cycles.
📚 Documentation: Create and maintain technical documentation as needed.
Required qualifications
3–6 years of relevant data engineering experience
Solid experience with AWS services (S3, IAM, Redshift, SageMaker, Glue, Lambda, Step Functions, CloudWatch)
Proficiency with Matillion and DevSecOps
Experience with platforms such as Databricks and Dataiku
Strong skills in Python or Java and SQL (Redshift preferred)
Proficiency with Jenkins, CloudFormation, Terraform, Git, and Docker
2–3 years of experience with Spark/PySpark
Ability to work effectively in cross-functional teams
Education:
A B.Tech. or higher degree in Computer Science or a related field is required.
Why Join Us? Be part of a dynamic and innovative team where your contributions will directly impact the success of our data-driven projects. We offer a collaborative work environment, opportunities for professional growth, and the chance to work with cutting-edge technologies.
Apply Now:
If you're ready to take your career to the next level and contribute to meaningful projects, we encourage you to apply today!