Lead Big Data Engineer, Java
Durham, NC / Dover, DE / Nashville, TN / Trenton, NJ / Orlando, FL / Indianapolis, IN / Augusta, ME / Annapolis, MD / Montpelier, VT / Bismarck, ND / Washoe, NV / Columbia, SC / Salem, OR / Austin, TX / Oklahoma City, OK / Phoenix, AZ / Frankfort, KY / Little Rock, AR / Richmond, VA / Pierre, SD / Santa Fe, NM / Atlanta, GA / Chicago, IL / Providence, RI / New Orleans, LA / Salt Lake City, UT / Concord, NH / Birmingham, AL / Columbus, OH / Denver, CO / Hartford, CT / Harrisburg, PA / Raleigh, NC / Lincoln, NE
Posted 14 days ago
Job Description

The Genesys Cloud Analytics platform is the foundation for decisions that directly impact our customers' experience as well as their customers' experiences. We are a data-driven company, handling tens of millions of events per day to answer questions for both our customers and the business. From new features that enable other development teams, to measuring performance across our customer base, to offering insights directly to our end users, we use our terabytes of data to move customer experience forward.

In this role, you'll partner with software engineers, product managers, and data scientists to build and support a variety of analytical big data products. The ideal candidate will have a strong engineering background, won't shy away from the unknown, and will be able to translate vague requirements into something concrete. Our team's focus is to operationalize big data products and curate high-value datasets for the wider organization, and to build the tools and services that expand the scope and improve the reliability of the data platform as our usage continues to grow.

Overview of the role

  • Manage a scrum team of software engineers.
  • Develop and deploy highly available, fault-tolerant software that drives improvements to the features, reliability, performance, and efficiency of the Genesys Cloud Analytics platform.
  • Actively review code, mentor, and provide peer feedback.
  • Engineer efficient, adaptable, and scalable architecture for all stages of the data lifecycle (ingest, streaming, structured and unstructured storage, search, aggregation) in support of a variety of data applications.
  • Build abstractions and reusable developer tooling that allow other engineers to quickly build self-service streaming and batch pipelines.
  • Build, deploy, maintain, and automate large global deployments in AWS.

Required experience and skills

  • Experience leading and growing a technical team
  • Experience working with Apache Spark
  • Experience as a software developer using Java 8+
  • Experience working in an AWS cloud environment
  • Experience working with data pipeline administration tools (Airflow, etc.)
  • Experience managing large data sets

Technologies we use and practices we hold dear

  • The right tool for the job over we've-always-done-it-this-way: we pick the languages and frameworks best suited to each problem.
  • Ansible for immutable machine images.
  • AWS for cloud infrastructure.
  • Automation for everything. CI/CD, testing, scaling, healing, etc.
  • Hadoop and Spark on EMR for batch processing.
  • Airflow for orchestration.
  • DynamoDB and S3 for query and storage.

Benefits

  • Market competitive salary with an anticipated base compensation range of $134,925 - $224,875. Actual salaries will vary depending on a candidate's experience, qualifications, skills, and location.
  • Medical, Dental, and Vision Insurance
  • Telehealth coverage
  • Flexible work schedules and work from home opportunities
  • Development and career growth opportunities
  • Open Time Off
  • 401(k) matching program
  • Adoption Assistance
  • Infertility treatments

See more Genesys benefits information at


Job Summary
Start Date
As soon as possible
Employment Term and Type
Regular, Full Time
Required Experience
Open