Job Responsibilities and Requirements:
• 5+ years of experience with Hadoop ecosystem tools, including Spark, Hive, and HDFS
• 3+ years of experience with Talend
• Experience with Hive and Spark (Python preferred)
• Proficiency with SDLC processes
• Solid knowledge of programming languages, application servers, database servers, and enterprise architecture
• Knowledge of JavaScript, Node.js, and AngularJS
• Cloud development experience with AWS preferred
• Knowledge of AWS EMR, S3, Lambda, ECS/Docker, Elastic Beanstalk, API Gateway, Athena, QuickSight, etc.
• Experience designing big data lakes/warehouses that integrate data from enterprise-wide applications and systems, private/hybrid/public cloud environments, and big data ecosystems (Spark, Cloudera, Hortonworks)
• Knowledge and understanding of ETL design and data processing mechanisms
• Write unit tests to identify defects and maintain code coverage
• Participate in code reviews and respond to feedback on your code
• Gather specific requirements and suggest solutions
• Good understanding of core AWS components and services