What you will do
- Design, develop, maintain, and optimize ETL pipelines for real-time and batch data processing;
- Work with technologies such as Redis, Kafka, Python, ELK, and ScyllaDB to ensure efficient data processing;
- Improve the performance and reliability of existing data pipelines and processing jobs;
- Implement scalable real-time data processing systems using Redis, Kafka, and column-based databases (see the sketch after this list);
- Create meaningful, insightful reports that guide data-driven decision-making, identify patterns of fraud, user cloning, and spam, and surface actionable insights;
- (For lead) Mentor engineers and contribute to setting best practices for data processing across the organization;
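
To give a flavor of the real-time work described above, here is a minimal Python sketch of a Kafka-to-Redis consumer of the kind this role builds; the topic name, broker address, and event schema are illustrative assumptions, not details of this team's actual stack.

```python
# Minimal sketch: consume events from Kafka and keep short-lived
# per-user activity counters in Redis (e.g. as a spam/fraud signal).
# "user-events", localhost addresses, and the event fields are
# hypothetical placeholders.
import json

import redis
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "user-events",                       # hypothetical topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
cache = redis.Redis(host="localhost", port=6379)

for message in consumer:
    event = message.value
    # Count events per user over a rolling window; an unusually
    # high counter can feed downstream fraud/spam detection.
    key = f"events:{event['user_id']}"
    cache.incr(key)
    cache.expire(key, 3600)  # retain one hour of activity
```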
What you will need
- 3+ years of hands-on experience in data engineering, focusing on real-time and batch processing systems;
- Strong experience with Redis, Kafka, column-based databases, and real-time data streaming;
- Proficient in Python and Bash shell scripting, and familiar with the ELK stack, ScyllaDB, or similar technologies;
- Ability to create clear and actionable reports from complex datasets;
- Strong communication skills, with the confidence to advocate for improvements and propose better solutions;
- Self-motivated, detail-oriented, and results-driven with strong problem-solving skills;
Preferred Qualifications:
- Proficiency in Rust, gawk, and Java is a plus;
- Experience analyzing large datasets to detect anomalies, fraud, and suspicious behaviors is a big plus;
- Experience working in an on-prem environment.