Location: Ft. Meade, Maryland, United States (Full-Time)
CANDIDATES ARE REQUIRED TO HAVE AN ACTIVE TS/SCI FULL SCOPE WITH POLYGRAPH TO BE CONSIDERED FOR THE POSITION.
- At least three (3) years' experience managing and monitoring large Hadoop clusters (>1,000 nodes).
- At least three (3) years' experience writing software automation scripts in scripting languages such as Perl, Python, or Ruby.
- At least three (3) years' experience in the planning, design, development, implementation, and technical support of multi-platform, multi-system networks, including those composed of CISCO and UNIX- or LINUX-based hardware platforms, to encompass diagnosing network performance shortcomings and designing and implementing performance improvements.
- Must demonstrate the ability to work with open-source NoSQL products that support highly distributed, massively parallel computation needs, such as HBase, CloudBase/Accumulo, BigTable, etc.
- Must have demonstrated work experience with the Hadoop Distributed File System (HDFS).
- Must have technical experience and knowledge of peer-to-peer distributed storage networks, peer-to-peer routing and application messaging frameworks.
- Must be able to demonstrate knowledge of analytical needs and requirements, query syntax, data flows, and traffic manipulation.
- Must hold a Hadoop/Cloud Systems Administrator certification.
- Must have significant experience provisioning and sustaining network infrastructures, as well as experience developing, operating, and managing networks required to operate in a secure PKI, IPsec, or VPN-enabled environment.
NOTE: A degree in Communications, Computer Science, Mathematics, Accounting, Information Systems, Program Management, or similar degree will be considered as a technical field.