Title: Customer Experience Operations Engineer
Position Overview:
Our Technical Support Department is recognized globally for delivering premier support, and support is of substantial importance to the business of all WebPros brands. The CX Operations Engineer enables front-line Support staff through the development, administration, and maintenance of essential back-end systems. The role also performs a variety of complex tasks requiring discretion and independent judgment in support of high-impact WebPros customers: consulting with customers and Development to determine system functional specifications, analyzing and recommending system changes, coaching and training members of Technical Support, and collaborating with the rest of the CX Operations team to ensure issues are resolved as effectively as possible. The CX Operations Engineer also manages internal support systems and scripts through analysis, recommendations, and modification of these tools.
Responsibilities:
Performs short- and long-term analyses and studies to proactively maintain support systems and execute preventative maintenance plans; monitors system status and reacts to issues as they arise
Works with and provides recommendations to CX Management to ensure service levels and policies are met, including assisting in establishing product and process improvement plans to reduce support effort and increase product availability and scalability
Owns and optimizes CX technology and related systems, process logic, and administration for the organization
Collaborates cross-functionally with CX, Product, and Engineering teams and leads the evaluation, selection, and implementation of new platforms, tools, and technologies to enhance Customer Experience offerings
Leads in critical incidents and outage situations, working with Engineering, CX, Site Reliability, and other teams to resolution
Documents scenarios for post-mortems and works cross-functionally to resolve and prevent outages
Develops playbooks and documentation of CX tool configurations, integrations, and workflows
Seeks and utilizes opportunities to gain and share knowledge and experience with new and existing technologies
Contributes to and makes recommendations for Company and Department documentation efforts
Adheres to the policies and procedures of the company
Maintains a core daily schedule to ensure efficient operations
Exemplifies the cPanel Core Values of Inclusion, Innovation, Trust, Collaboration, and Fun
Qualifications:
Expert, industry-leading experience and knowledge, with a sharp troubleshooting and critical-thinking mindset, in the areas noted in the technical requirements
5+ years of Linux/*nix/BSD system administration experience
3+ years hosting industry experience
2+ years of Python development experience, preferably with matching PostgreSQL experience
Advanced knowledge of Linux and general cybersecurity concepts
Solid understanding of API calls (HTTP standards, network requests and responses, JSON payloads) and server-client architecture
Advanced knowledge of scripting and development focused on system administration including automation and resiliency
Additional consideration for knowledge in Zendesk ticket system administration
Superior written communication skills required; able to communicate effectively in all written work, utilizing proper grammar, spelling and punctuation
Excellent oral communication skills required; able to speak clearly and effectively in positive or negative situations with employees and in group settings
Able to motivate, coach, mentor, and develop relationships quickly with all levels of employees in a professional and effective manner
Able to gather data, facts, and impressions from a variety of sources about prospective and current employees; to seek knowledge about policies, rules, laws, procedures, and practices; and to manage the flow of information, classifying and organizing it for use in decision-making and monitoring
Demonstrates the highest level of ethical behavior and displays integrity and ethics in handling confidential information
Strong organizational skills, able to prioritize and to use time effectively in an unsupervised fashion, pursuing other activities when regular duties have reduced volume; able to complete projects in a timely manner
Demonstrates good judgment and strong skills in achieving maximum organizational performance and efficiencies
Able to follow through on assignments in order to achieve company and departmental goals
Able to adapt to changes in the work environment, manage competing demands and deal with frequent changes, delays, or unexpected events
Able to apply critical thinking and troubleshooting techniques to efficiently resolve issues, and able to apply independent judgment on a regular basis in making decisions without supervision
Advanced technical skills including:
Mastery of web-server-related technologies, including Apache, Nginx, Django, uWSGI, MySQL, PostgreSQL, PHP, Exim, and more
Strong understanding of CentOS/RHEL/AlmaLinux/Ubuntu systems, including package management tools (yum, dnf, rpm, apt-get), shell commands and scripting, and Linux security concepts
Strong understanding of cPanel software, configuration, and implementation
Strong understanding of automation and deployment tools, such as Jenkins, Ansible, HashiCorp Stack, Docker, GitLab (and general git), GitHub, OpenStack
Strong understanding of Python/PHP/JavaScript
Travel: Occasional local and overnight travel required, including some interstate travel by air.
Data Engineer
ABOUT Healthcare
St. Paul, MN – Hybrid or Remote
Position Overview: Serves as a technical expert on our data layer. Discusses customer needs; maintains and updates databases, warehouses, and data marts. Tunes and ensures optimal database and server performance. Assists with reporting and analytics to meet business intelligence requirements and ensures overall data integrity, quality, and related system maintenance.
Responsibilities:
• Discusses needs, operations and data requirements with staff or customers and outlines system capabilities and approaches to pull, maintain and analyze system data.
• Works with other data engineers to design and maintain scalable data models, tables, views, and ETL pipelines.
• Helps design, build, and improve the infrastructure for ingesting, storing, securing, and transforming data at scale.
• Reassesses the current data environment and provides solutions, improving how we manage and use data for reporting.
• Cleans up and establishes best practices for developing new data tables and views; simplifies and builds a scalable design, improving speed to execute and deliver reporting.
• Helps design and build systems to monitor and analyze data; responsible for creating all data tables and views required for front-end reporting.
• Provides technology ownership of data solutions for projects the team has been tasked with.
• Works with a cross-functional team of business analysts, architects, engineers, and data analysts to formulate technical requirements.
• Designs and builds data pipelines from various data sources to a target data warehouse, using batch data-load strategies and cloud technologies.
• Conceptualizes and generates infrastructure that allows data to be accessed and analyzed effectively.
• Documents database designs, including data models, metadata, ETL specifications, and process flows for business data project integrations.
• Performs periodic code reviews and test plans to ensure data quality and integrity.
• Provides input into strategies that drive the team forward with delivery of business value and technical acumen.
• Executes proofs of concept, where appropriate, to help improve our technical processes.
• Provides analysis, interpretation and counsel regarding the application and usage of systems, business intelligence and reporting to improve policies, programs, and practices.
• Provides research and feedback to resolve management and customer questions and requirements; assists with receiving customer feedback and coordinating resources and responses as required.
• Analyzes and reviews operations, results, feedback, and related information on an ongoing or as-needed basis to determine trends, draw conclusions, interpret findings, and present results, proposals, and recommendations to management.
• Ensures the accuracy of operational databases, reports, and related details through audits, queries, and operational reviews; works with teams to resolve discrepancies.
• Interprets and applies department policies and procedures and assists with applicable laws, rules, and regulations; receives guidance within these areas as needed.
• Contributes to the efficiency and effectiveness of the department's service to its customers by offering suggestions and directing or participating as an active member of a work team.
• Performs other duties as assigned
Required Knowledge:
• Bachelor’s Degree in Information Technology or related field and 5 years of related experience; or equivalent education and experience.
• 4+ years of experience in managing data/databases (Proficient in SQL)
• 4+ years of experience in translating business requirements into technical data solutions on a large scale.
• "Can Do" attitude
• Healthcare (HL7) Data experience
• Streaming experience (Kafka)
• Experience with AWS Firehose delivery to the data lake
• Ability to research and troubleshoot potential issues presented by stakeholders within the data ecosystem.
• Experience with Data Modeling, Data warehousing
• Strong analytical and interpersonal skills.
• Ability and willingness to work as a team
• Enthusiastic, highly motivated and ability to learn quickly.
• Able to work through ambiguity in a fast-paced, dynamically changing business environment.
• Ability to manage multiple tasks at the same time with minimal supervision.
• Advanced principles, practices, and techniques of managing data, databases, and system analytics.
• Specialized understanding of data engineering, reporting and management.
• Understanding of the administration and oversight of data, system and business intelligence programs, policies, and procedures.
• Various methods to identify and resolve analytical problems, questions, and concerns.
• Basic methods and approaches to analyze and improve business operations.
• Understanding of applicable laws, codes, and regulations.
• Computer applications and systems related to the work.
• Principles and practices of serving as an effective project team member.
• Methods to communicate with staff, coworkers, and customers to ensure safe, effective, and appropriate operations.
• Correct business English, including spelling, grammar, and punctuation.
Required Skills:
• Building and maintaining ETLs, ELTs, marts, tables, and views
• Performing advanced data engineering duties in a variety of assigned areas.
• Overseeing and administering business intelligence and data analytical systems.
• Using standard, customized, and complex data analytics tools.
• Training others in policies and procedures related to the work.
• Identifying, documenting, and reporting on system and data administration.
• Serving as a team member in the development and management of projects.
• Operating in both a team and individual contributor environment.
• Using initiative and independent judgment within established department guidelines.
• Contributing effectively to the accomplishment of team or work unit goals, objectives, and activities.
• Establishing and maintaining effective working relationships with a variety of individuals.

Sean Summers | Senior Solutions Architect | South Bend, Indiana | 1-337-935-0003

Senior Solutions Architect with experience crafting and implementing cutting-edge AWS solutions using Infrastructure as Code (Terraform, CloudFormation). Designed (using Domain-Driven Design methodology), delivered, and supported a Python Data Quality Engine for various financial clients' reporting and compliance needs across several SQL stores, data mesh, and data lake environments. Excel in Python, SQL, and scripting tools, particularly optimizing data and CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins). Mentored engineering teams and played a pivotal role in migrating computing resources to AWS.

SKILLS

★★★★★ DevOps ★★★★★ Unix Operating Systems ★★★★★ Database Design Principles ★★★★★ Data Modeling Techniques ★★★★★ Data Manipulation and Preparation Tools ★★★★★ Python ★★★★★ Data Pipelines ★★★★★ CI/CD Pipelines ★★★★★ Data Mesh Architecture ★★★★★ Domain-Driven Design Methodology ★★★★★ Python 1.2.1 to 3.12 ★★★★★ SQL, ETL, and Data Warehousing (including Snowflake) ★★★★★ AWS DevOps, Security, Advanced Networking ★★★★★ Containerization, Functions as a Service ★★★★★ Security, Identity Management, Federation

WORK EXPERIENCE

May 2023

AWS Senior Cloud Architect at Global Technology Solutions

Supported multiple teams in deploying development and production projects into AWS
Standardized all deployments to be Infrastructure as Code, defined in Terraform

Projects:
-   custom Python vector index APIs (and batch embedding ingest); see the sketch after this list
-   Python-based Large Language Model (LLM) applications (using LangChain and Hugging Face)
-   Amazon Chime voice, video, and chatbots (using Python Lambdas)
-   Amazon Connect (with Python Lambda integrations)
-   Genesys Cloud CX integration with Amazon Lex/Polly chatbots (using Python Lambda)
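
A minimal sketch of the vector-index idea in the first project above, assuming a brute-force in-memory design; the `VectorIndex` class and its methods are invented for illustration and are not the project's actual APIs:

```python
"""Hypothetical in-memory vector index sketch; all names invented for illustration."""
import numpy as np


class VectorIndex:
    """Brute-force cosine-similarity index (real deployments sat behind managed stores)."""

    def __init__(self, dim: int) -> None:
        self.dim = dim
        self.ids: list[str] = []
        self.matrix = np.empty((0, dim), dtype=np.float32)

    def add_batch(self, ids: list[str], embeddings: np.ndarray) -> None:
        """Batch embedding ingest: L2-normalize once so a query is one matrix multiply."""
        norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
        self.matrix = np.vstack([self.matrix, embeddings / norms])
        self.ids.extend(ids)

    def query(self, embedding: np.ndarray, k: int = 5) -> list[tuple[str, float]]:
        """Return the top-k (id, cosine score) pairs for a query embedding."""
        q = embedding / np.linalg.norm(embedding)
        scores = self.matrix @ q
        top = np.argsort(scores)[::-1][:k]
        return [(self.ids[i], float(scores[i])) for i in top]


if __name__ == "__main__":
    index = VectorIndex(dim=3)
    index.add_batch(["a", "b"], np.array([[1, 0, 0], [0, 1, 0]], dtype=np.float32))
    print(index.query(np.array([0.9, 0.1, 0.0], dtype=np.float32), k=1))  # 'a' ranks first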

October 2022 – May 2023

Data Quality Engine Architect and Delivery Team Lead at [Fiserv] Fiserv Technology Services (Apexon contract)

Projects:
-   Air-gapped Python Data Quality Engine implementation, using approved Pydantic and SQLAlchemy libraries, and factored into a SOLID / hexagonal architecture to support future business requirements (see the sketch after this list)
-   Used enterprise best practices and security guidelines to build a CI/CD pipeline with the tools available to deliver tagged code to deployment repos (GitLab, GitLab CI, Nexus)
-   Integrated with the existing homegrown ETL process, utilizing Python and MongoDB, allowing import of the Data Quality Engine as a Python library installable with pip
-   Provided onboarding and process training to on-shore Python developers to deliver the PoC project, and documentation and hands-on training to the off-shore team tasked with day-2+ operations and maintenance
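
A minimal sketch of the hexagonal ("ports and adapters") factoring described in the first bullet, assuming Pydantic for the result model: the core rule logic depends only on a small port protocol, and storage adapters (SQLAlchemy query, file reader, in-memory list) plug in at the edges. All names below are hypothetical illustrations, not the engine's actual code:

```python
"""Hypothetical hexagonal-architecture sketch; class and function names are invented."""
from typing import Iterable, Protocol

from pydantic import BaseModel


class CheckResult(BaseModel):
    """Typed result model: Pydantic validates and serializes rule outcomes."""
    rule: str
    passed: bool
    detail: str = ""


class RowSource(Protocol):
    """Port: any adapter (SQLAlchemy, CSV, MongoDB) that yields rows as dicts."""

    def rows(self) -> Iterable[dict]: ...


def not_null_check(source: RowSource, column: str) -> CheckResult:
    """Core domain rule: knows nothing about where the rows come from."""
    missing = sum(1 for row in source.rows() if row.get(column) is None)
    return CheckResult(rule=f"not_null:{column}", passed=missing == 0,
                       detail=f"{missing} null value(s) found")


class InMemorySource:
    """Trivial adapter; a SQLAlchemy-backed adapter would satisfy the same port."""

    def __init__(self, data: list[dict]) -> None:
        self._data = data

    def rows(self) -> Iterable[dict]:
        return iter(self._data)


if __name__ == "__main__":
    print(not_null_check(InMemorySource([{"id": 1}, {"id": None}]), "id"))
```

One payoff of this factoring, per the ETL-integration bullet: the engine can be imported as a pip-installable library, with the host process injecting its own adapters rather than the engine hard-wiring a data source.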

July 2022 – March 2023

Application Architect and Delivery Team Lead at [SVB] Silicon Valley Bank (Apexon contract)

Projects:
-   Implemented a new bank revenue application on a tight timeline, using codat.io to load daily financial transactions from corporate borrowers and ML to determine daily remittance against obligations
-   Designed a PoC using AWS Step Functions (Python Lambdas) to maintain isolated authentication and state for each borrower account link, and Webhooks as a Service using EventBus endpoints to Python consumers (SQS, SNS, and Step Function subscribers); see the sketch after this list
-   Provided configurations to build and deploy using the enterprise's GitHub repositories, Harness CI/CD, and SonarQube quality gating and review process
-   Due to technical barriers and business events, this project unfortunately did not promote to production
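
The "Webhooks as a Service" bullet can be sketched with a standard EventBridge fan-out pattern: an API Gateway-fronted Lambda accepts a webhook and republishes it onto a custom event bus, where SQS, SNS, and Step Function targets subscribe via rules. The bus, source, and detail-type names below are hypothetical:

```python
"""Hypothetical webhook-to-EventBridge sketch; bus and source names are invented."""
import json

import boto3

events = boto3.client("events")


def handler(event, context):
    """API Gateway proxy handler: fan a webhook out through a custom event bus."""
    events.put_events(
        Entries=[
            {
                "EventBusName": "webhooks-bus",        # hypothetical bus name
                "Source": "webhooks.codat",            # hypothetical event source
                "DetailType": "transaction.received",  # hypothetical detail type
                "Detail": event.get("body") or "{}",   # raw JSON payload from the webhook
            }
        ]
    )
    # 202 Accepted: delivery to SQS/SNS/Step Function subscribers is asynchronous.
    return {"statusCode": 202, "body": json.dumps({"accepted": True})}
```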

AWS Architect and System Administrator at [OCC] Options Clearing House (Apexon contract), Chicago

Projects:
  • Migrated clearinghouse auction software from C#/Win32 to Python
  • Assisted the deployment team in specification of requirements and provided Infrastructure as Code pull requests to the appropriate GitHub repositories to have assets provisioned in the Rancher managed Kubernetes on-prem cluster via Harness CI/CD
  • Assisted the data management team in proper SQL definitions and Infrastructure as Code provisioning of Postgres instances in the Rancher managed Kubernetes on-prem cluster
  • Provided Infrastructure as Code pull requests to manage and execute the team's data schema as SQL DDL executed by the enterprise CI/CD system and Flyway deployment

Data Quality Engine Architect and Delivery Team Lead at [USAA] United States Automobile Association (Apexon contract), San Antonio

Projects:
  • Designed, coded, and delivered a Python Data Quality Engine using enterprise-approved OpenShift containers, Python (3.6 only), and JFrog Xray-managed libraries
  • Built the first Talon Batch deployment into production in the enterprise, using Control-M to execute Python jobs on the OpenShift cluster with access to both legacy SQL (Netezza) and future-state SQL (Snowflake)
  • Trained a 4-member team for day-2+ management and maintenance of the engine and handed over the codebase
  • Consulted on the Domino Data Lab Python notebook cluster migration to AWS, with a desired future state of replacement with AWS SageMaker

AWS and Linux consultant at [OCC] Options Clearing House (Apexon contract), Chicago

Project:
  • Due to FFIEC requirements, all API traffic between on-prem and AWS was required to be encrypted and authenticated, with auditable proof
  • Assisted with the coordination of on-prem Kubernetes configuration of the Google Apigee gateway and AWS-hosted Apigee gateways, with the goal of proxying all API traffic between sites
  • Provided an nginx implementation as a PoC of the architecture, and assisted in the Ansible configuration of custom Linux images and an air-gapped install and configuration of Google Apigee
  • Provided on call support to the engineering teams to release project blockers in critical sections of the deployment schedule

April 2019 – May 2020

Senior Site Reliability Engineer at WAITR, Lafayette, Louisiana

Projects:
  • Managed the AWS Account constellation, migrating from manual deployments and instance-based MySQL and PostgreSQL stores to AWS Elastic Beanstalk and RDS Aurora MySQL and PostgreSQL
  • Created CI/CD deployment pipelines using AWS CodeBuild, AWS CodePipeline, and CircleCI for GitHub repos providing Infrastructure as Code, data store, and API Swagger (OpenAPI) deployments into AWS API gateway.
  • Refactored a monolithic application into AWS ECR-hosted containers deployed via AWS Elastic Beanstalk onto AMD instances, with algorithmic deployment based on traffic and load, providing elasticity that made "stampede days" non-events through auto-provisioning and auto-scaling
  • Centralized all logging into AWS CloudWatch Logs and subsequent collection into AWS Kinesis Firehose endpoints for Athena querying of Parquet partitioned data at rest
  • Provided a Python Lambda Step Function PoC to integrate enterprise HR systems with external web-hooks to automate talent acquisition efforts, decreasing the latency between job posting and position fulfillment
  • Created a Python-based AWS S3 subscriber system to integrate with Paylocity CSV for payroll information, significantly reducing cost compared to the legacy EC2 instance running cron jobs (see the sketch after this list)
  • Provided a CloudFormation managed pilot-light deployment and AWS CloudFront proxy for all production traffic, allowing instant in-flight redirection of traffic in the event of disaster or regional failure based on Route53 health checks. This also provided significant savings over legacy non-CDN traffic
  • Negotiated an AWS Enterprise Agreement for custom pricing (requires a minimum of $100,000 MRR to AWS) for the company AWS accounts
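
The S3 subscriber bullet follows a common event-driven pattern, sketched below: a Lambda subscribed to S3 ObjectCreated events parses each uploaded CSV, replacing a cron job on an always-on EC2 instance. The processing step is a hypothetical stand-in:

```python
"""Hypothetical S3-subscriber sketch; the downstream step is invented for illustration."""
import csv
import io

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    """Triggered by S3 ObjectCreated events; parses each uploaded payroll CSV."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        for row in csv.DictReader(io.StringIO(body.decode("utf-8"))):
            process_payroll_row(row)  # hypothetical downstream step


def process_payroll_row(row: dict) -> None:
    print(row)  # stand-in for the real payroll integration
```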

April 2018 – April 2019

Senior Cloud Engineer at Trek10, South Bend, Indiana, US

As an AWS Premier Tier Partner, Trek10 is in the top 10% of all AWS partners globally. As a Senior Cloud Engineer on the Professional Services team, I was involved with:
  • Engagements with AWS Professional Services, where AWS provided on-site technical resources that coordinated under a Trek10 engagement contract to deliver our world-class serverless solutions (almost exclusively Python)
  • Provided engineering for several AWS QuickStart templates published by AWS to provide best-practice and PoC deployments of several services, including AWS CloudFormation stackset integration with Jenkins CI/CD, AWS CodePipeline solution for multi-party approval flows, and several Lambda serverless event triggered examples
  • Worked on-site with customer engineering teams and subject matter experts to overcome technical resistance on the Cloud Adoption Journey and provided expertise and oversight on solutions while ensuring the customer's access to relevant technical and educational material (some covered under NDA with AWS)

November 2015 – April 2018

Senior Systems Administrator, Enterprise Systems Unit at Hesburgh Libraries of the University of Notre Dame, Notre Dame, Indiana, USA
  • Lead Architect for the ND CloudFirst Initiative, a 3-year project to migrate 80% of computing resources to AWS
  • Mentored the existing Enterprise Systems Unit and library engineering teams, with the outcome of almost 50% AWS certification across the departments
  • Implemented Infrastructure as Code using CloudFormation and Ansible exclusively for all deployments, as humans are not allowed to taint production accounts
  • Wrote the guidelines and runbooks, with automated CI/CD pipelines implementing deployment in AWS for all library assets published to AWS
  • Served as Hesburgh Libraries' lead representative to the OIT and CloudFirst organizations, participating in several cross-campus initiatives and rollouts of AWS technology, from initial use case through approval and building CloudFormation guardrails for automated deployment, with the desired future state of integrating with ServiceNow
  • Used AWS PrivateLink to integrate an AWS SQS queue with the campus Talend deployment, allowing newly issued ID badges to transact business in the library simultaneously with printing, significantly improving on the target SLO of 10 minutes (see the sketch after this list)
  • Used Apache NiFi to integrate with Indiana's legacy ILLiad borrowing system (ILLiad has no API, only a web-based UI), allowing instant automated synchronization between the legacy library catalog system and external Indiana institutions' assets, saving almost a week per month in ongoing labor costs
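
A minimal sketch of the consumer side of the PrivateLink/SQS integration above, assuming placeholder endpoint and queue URLs: the SQS client targets the VPC interface endpoint, so badge events never traverse the public internet:

```python
"""Hypothetical PrivateLink SQS consumer sketch; endpoint and queue URLs are placeholders."""
import boto3

# Point the client at the VPC interface endpoint instead of the public SQS endpoint,
# so traffic stays on the private network (the PrivateLink part of the design).
sqs = boto3.client(
    "sqs",
    region_name="us-east-1",
    endpoint_url="https://vpce-0123456789abcdef0-example.sqs.us-east-1.vpce.amazonaws.com",
)
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/badge-events"  # placeholder

while True:
    # Long polling keeps latency low without hammering the API, which is what
    # let badge activation beat the 10-minute SLO mentioned above.
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, WaitTimeSeconds=20, MaxNumberOfMessages=10)
    for msg in resp.get("Messages", []):
        print("badge issued:", msg["Body"])  # stand-in for activating library access
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```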

EDUCATION

August 1990 – May 1992

Schreiner University, Kerrville, Texas, USA; 150+ credit hours toward Mathematics and Philosophy

National Merit Finalist Scholarship recipient

Member of the University's inaugural Honors program cohort

Certifications Held

  • November 2018 – November 2024 AWS Certified Developer - Associate, AWS ID WjFF9DY12JB41N9X
  • July 2017 – February 2021 [DOP] AWS Certified DevOps Engineer - Professional, AWS ID 763XEBHK12VEQR5L
  • May 2018 – May 2020 [ANS] AWS Certified Advanced Networking - Specialty, AWS ID S86FKHE2LF41Q3C4
  • July 2015 – July 2018 Certified Ethical Hacker v8, EC-Council ID ECC26959285350
  • December 2000 – May 2013 [MCSE+I] Microsoft Certified Systems Engineer + Internet, Microsoft ID A010-7856, MCP 276427