Data Engineer


Location: Hyderabad
Key Responsibilities

  • Design and maintain scalable data pipeline architectures.
  • Build and manage large, complex datasets to meet both functional and non-functional requirements.
  • Implement internal process enhancements—automation, performance optimization, and infrastructure scaling.
  • Develop high-performing ingestion, transformation, and publishing frameworks using Python, PySpark, and AWS big data technologies.
  • Create analytics solutions that leverage data pipelines to drive insights in customer acquisition, operational efficiency, and strategic KPIs.
  • Collaborate with Product, Data, Engineering, Executive, and Design teams to resolve data-related challenges and support their infrastructure needs.
  • Ensure secure and compliant data management across multiple regions, data centers, and AWS environments.
  • Build reusable tools and frameworks enabling data engineers, analysts, and data scientists to innovate efficiently.
  • Continuously enhance system functionality by working closely with data and analytics specialists.

Required Experience & Skills

  • 6+ years of overall IT experience with 4+ years building data applications.
  • Strong expertise in SQL, Python, and PySpark with hands-on experience working across relational and NoSQL databases.
  • Proven experience with Databricks for data engineering and pipeline development.
  • Demonstrated ability to design and optimize cloud-based big data architectures and pipelines.
  • Experience conducting root cause analysis on complex datasets and business processes.
  • Strong analytical skills with the ability to work on structured & unstructured datasets.
  • Proficiency in data modeling, metadata management, workload management, and data transformation processes.
  • Strong track record in handling, processing, and extracting value from large-scale datasets.
  • Understanding of structured, semi-structured, data-at-rest, and data-in-motion architectures.
  • Experience collaborating with cross-functional teams in fast-paced environments.

Preferred Technologies & Tools

  • Databases: Postgres, Cassandra, SQL & NoSQL systems
  • Workflow/Pipeline Tools: Apache NiFi, AWS Step Functions, Oozie, Azkaban, Luigi, Airflow
  • AWS Services: EC2, EMR, RDS, Redshift
  • Streaming Technologies: AWS DMS, Kinesis, Spark Streaming
  • Programming Languages: Python, Java, C++, Scala

Apply now: hr@techfokes.com
