
Lead Systems Engineer

Ampcus, Inc
Washington, D.C., United States
1629 K Street Northwest
Nov 16, 2024

Ampcus Inc. is a certified global provider of a broad range of Technology and Business consulting services. We are in search of a highly motivated candidate to join our talented Team.

Job Title: Lead Systems Engineer

Location(s): Washington, DC

We are seeking a Lead Systems Engineer to support systems monitoring initiatives across several SOWs. The role is responsible for administering systems and application monitoring tools; expertise with at least one monitoring tool such as DataDog is required.

* DataDog administration experience on the Linux platform, instrumenting Java-based applications running on the Tomcat application server.

* Configuration experience in Infrastructure Monitoring, Network Monitoring and Centralized Logging.

* Or similar administration experience with the ELK Stack: Elasticsearch (search and analytics engine), Logstash (ingest pipeline), and Kibana (visualization and dashboard creation).

* Strong Linux platform (Red Hat) background.

* Automation experience with scripting (Python, Shell, Ansible) preferred.

* Understanding of SSL setup on Linux servers, including installing CA certificates (a minimal check script is sketched after this list).

* Experience with network monitoring and knowledge of network components is a plus, including switches, routers, Palo Alto Networks firewalls, network utilization via SNMP, F5 load balancers, WebSEAL, Infoblox, Gigamon, and network mapping.

* Working knowledge of other monitoring tools such as BigPanda and CloudBeat (synthetic monitoring) is desired. These tools are currently used to monitor applications and business transactions that impact the business and customers.

* Responsibilities include writing scripts and installing, managing, and maintaining the monitoring tools as needed, as well as integrating with other tools and collaborating with other groups and their tools.
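
A minimal sketch of the kind of SSL check mentioned above, using only the Python standard library; the hostname is a placeholder, not a system from this posting:

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_cert_expiry(host: str, port: int = 443) -> int:
    """Return the number of days before a server's TLS certificate expires."""
    context = ssl.create_default_context()  # uses the system CA bundle on Linux
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' is a string such as 'Jun  1 12:00:00 2025 GMT'
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
    )
    return (expires - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    host = "example.com"  # placeholder host
    print(f"{host}: certificate expires in {days_until_cert_expiry(host)} days")
```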

Responsibilities:

* Manages, configures, and maintains the DataDog tool on the Linux platform.

* Responsible for network monitoring; infrastructure/server monitoring (Linux, Windows, AIX) using DataDog; and application, SNMP, and log monitoring.

* Configures centralized logging of logs from different sources, such as WebSphere/Tomcat and client web servers on AIX, to DataDog on Linux. Requires knowledge of load balancers such as F5 to route logs to the log server, and the ability to handle different log formats.

* Creates required dashboards with data visualizations in DataDog.

* Manages, configures, and maintains the DataDog APM tool on the Linux platform.

* Responsible for instrumenting Java applications with DataDog, setting up health rules, and fine-tuning monitoring in DataDog.

* Sets up DataDog End User Monitoring / Browser Real User Monitoring for applications using JavaScript injection.

* Creates Selenium scripts to monitor business transactions using CloudBeat's synthetic monitoring (a Selenium sketch follows this list).

* Provides support for all significant production issues. Activities may include gathering information from a wide variety of sources across all platforms to analyze for correlations, identifying specific performance causes, recommending possible solutions to remedy the issue, and issuing reports with key findings and next steps.

* Creates documentation to support the management and maintenance of the DataDog tools. Provides training on the tools and the associated processes and procedures.

* Analyzes tool data and usage. Communicates weekly with management, both verbally and via detailed written status reports, regarding potential problems and concerns.

* Works with different Systems and Application Architecture teams to ensure that systems monitoring requirements are addressed early in the development process. Coordinates with project teams to ensure that monitoring of new applications is available before release to production.

* Assists in reviewing and analyzing business & system requirements and specifications for systems monitoring tool protocols and future tool usage.
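
A minimal Selenium sketch (Python) of the kind of business-transaction check referenced above; the URL, element locators, and credentials are illustrative placeholders, and a real CloudBeat job would wrap a script like this in its own runner and reporting:

```python
import time

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

def check_login_transaction(base_url: str) -> float:
    """Drive a login flow headlessly and return its end-to-end duration in seconds."""
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")  # no display needed on a Linux monitoring host
    driver = webdriver.Chrome(options=options)
    try:
        start = time.monotonic()
        driver.get(f"{base_url}/login")                                     # placeholder path
        driver.find_element(By.ID, "username").send_keys("synthetic_user")  # placeholder locators
        driver.find_element(By.ID, "password").send_keys("not-a-real-secret")
        driver.find_element(By.ID, "submit").click()
        # Wait for an element that signals the transaction completed successfully.
        WebDriverWait(driver, 15).until(
            EC.presence_of_element_located((By.ID, "dashboard"))
        )
        return time.monotonic() - start
    finally:
        driver.quit()

if __name__ == "__main__":
    elapsed = check_login_transaction("https://app.example.com")  # placeholder URL
    print(f"login transaction completed in {elapsed:.2f}s")
```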

Competencies:

1. Effective organizational, interpersonal, analytical, and communication skills, plus hands-on technical experience

2. Self-motivated, adaptable to change, forward-thinking

3. Must be able to prioritize and manage time under tight deadlines and demonstrate initiative in problem-solving.

4. Enthusiasm for continuous learning, internal drive, intellectual curiosity, and a desire to help customers succeed

5. Strong technical skills and ability to work proactively

6. Comfortable working under Project Manager supervision

Specific Required Skills:

* 5-8 years of strong IT experience and a good working knowledge of a variety of technology platforms in a distributed environment, including Microsoft systems (e.g., Windows Server 2012 and 2016, Active Directory, Exchange, SharePoint), Linux/Unix, VMware, SQL Server, database architectures, TCP/IP, VPNs, mainframe, and LAN/WAN technologies and architectures

* A minimum of 3 years of hands-on experience installing, integrating, managing, and maintaining monitoring tools such as DataDog, including administration and support.

* Or similar log management experience with the ELK Stack: Elasticsearch (search and analytics engine), Logstash (ingest pipeline), and Kibana (visualization and dashboard creation)

* Experience writing Shell, Python, Selenium, and VuGen scripts

* Experience with SSL certificates and encryption methods on Linux

* Experience in developing and implementing systems monitoring and alerting strategies in diverse, large-scale environments

* Experience developing and documenting processes, procedures, and policies for tool usage and integration

* Ability to author tool maintenance and training documentation, as well as to support requests for training on tool usage

* Knowledge of and experience with configuring alerts, dashboards, and ad hoc reports (a monitor-creation sketch follows this list)

* Strong understanding of service level management (SLAs, SLRs, etc.)

* Determine and document tool backup and recovery procedures

* Experience with data management tools and databases (e.g., DB2, SQL; familiarity desired)

* Experience in systems and Java applications troubleshooting using monitoring tools like DataDog

* Understanding and experience with both waterfall and agile Software Development Life Cycles (SDLC)

* Bachelor of Science in Computer Science or a related field (e.g., Engineering, Applied Science, Math) or equivalent experience.

* Experience with SAFe agile methodologies
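
A minimal sketch of alert configuration as code, assuming the datadogpy (`datadog`) Python client; the metric query, threshold, and notification handle are placeholders rather than values from this posting:

```python
import os

from datadog import api, initialize

# API and application keys are expected in the environment, not hard-coded.
initialize(
    api_key=os.environ["DD_API_KEY"],
    app_key=os.environ["DD_APP_KEY"],
)

# Create a metric alert that fires when average CPU usage on prod-tagged
# hosts exceeds 90% over the last 5 minutes.
monitor = api.Monitor.create(
    type="metric alert",
    query="avg(last_5m):avg:system.cpu.user{env:prod} > 90",  # placeholder query
    name="High CPU on prod hosts",
    message="CPU above 90% for 5 minutes. @ops-team",  # placeholder notification handle
    tags=["team:monitoring", "managed-by:script"],
    options={"thresholds": {"critical": 90}, "notify_no_data": False},
)
print("created monitor", monitor.get("id"))
```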

Licenses/Certifications:



* ITIL Foundation v3 within 180 days (preferred)

* SAFe certification



Ampcus is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, protected veteran status, or disability.

