Job Duties:
Develop, create and modify computer applications software and specialized utility programs. Analyze user needs and develop software solutions. Analyze and design databases within an application area. Architect, design and develop across the full software development life cycle, including requirements gathering, design, development and test automation of big data solutions using Apache Spark, Hadoop, Hive, Scala, Python, Azure Databricks, Azure Data Factory and Azure Web App Services. Design and implement data-ingestion and processing pipelines for various business use cases. Develop and implement solutions using Spark Core RDDs, DataFrames/Datasets and Streaming. Develop and deploy Docker containers and images. Troubleshoot and fix product issues identified internally or in production deployments.
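For illustration only, a minimal sketch of the kind of data-ingestion and processing pipeline described above, written with PySpark; the input path, column names and output path are hypothetical placeholders, not details taken from this posting:

# Minimal, illustrative Spark ingestion pipeline; all paths and column
# names below are assumed for the example and would differ in practice.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Read raw CSV files landed by an upstream process (hypothetical path).
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("/mnt/raw/orders/"))

# Basic cleansing: drop rows missing the key and derive a partition column.
cleaned = (raw
           .dropna(subset=["order_id"])
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("order_date", F.to_date("order_ts")))

# Write curated, partitioned Parquet for downstream jobs (hypothetical path).
(cleaned.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("/mnt/curated/orders/"))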
Work Location:
Various unanticipated work locations throughout the United States; relocation may be required. Must be willing to relocate.
Minimum Requirements:
Education: Master's degree in Computer Applications or Applied Computer Science (will accept foreign education equivalent)
Experience: One (1) year