Redfin -
Senior Software Developer Seattle, USA
Over a long tenure, guided policy and implemented tools and services, usually relating to compute, virtualization, or cloud infrastructure.
Led the adoption and deployment of Kubernetes, directly designing several key components of our integration. In addition to championing the effort, I played a lead role in implementing the chosen security model, multi-region redundancy, integration with existing cloud and on-premises networks, and several pieces of custom automation.
Since no off-the-shelf tool was available at the time, I created a Google OIDC client for kubectl, the Kubernetes command-line client. It became the way all Redfin engineers authenticated their tooling to the Kubernetes clusters. The client used a variation of Google OAuth designed for smart TVs and other devices without an integrated keyboard or browser; since our engineers worked across a variety of devices, OSes, and browsers, this flow best matched our use case.
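The flow in question is the OAuth 2.0 device authorization grant (RFC 8628). A minimal sketch of it against Google's public endpoints, in Python rather than the plugin's actual implementation, with placeholder credentials:

```python
# Sketch of the OAuth 2.0 device authorization grant (RFC 8628) against
# Google's endpoints. CLIENT_ID and CLIENT_SECRET are placeholders, not
# the real tool's credentials.
import time
import requests

CLIENT_ID = "example.apps.googleusercontent.com"  # placeholder
CLIENT_SECRET = "example-secret"                  # placeholder

# Step 1: request a device code and a short human-friendly user code.
grant = requests.post(
    "https://oauth2.googleapis.com/device/code",
    data={"client_id": CLIENT_ID, "scope": "openid email"},
).json()

# Step 2: the engineer completes login in any browser, on any device.
print(f"Visit {grant['verification_url']} and enter code {grant['user_code']}")

# Step 3: poll the token endpoint until the login is approved.
while True:
    time.sleep(grant["interval"])
    token = requests.post(
        "https://oauth2.googleapis.com/token",
        data={
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "device_code": grant["device_code"],
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        },
    ).json()
    if "id_token" in token:
        break  # while pending, the response carries an "error" field instead

# The id_token is what kubectl then presents to the cluster's API server.
```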
Several "buy" options for layering a deployment pipeline on top of vanilla kubernetes assume a static layout of "namespaces". Namespaces are a central feature of kubernetes which control access to secrets and hardware resources: a static namespace layout introduces unnecessary friction when the applications within are more dynamic.
To create the automation enabling a more dynamic, flexible namespace model, we built several "controllers" (robots running within the clusters) using metacontroller. These controllers validate and respond to certain requests by creating configured namespaces and turning control of them over to the requester. The right to make these requests is protected by Kubernetes' native authorization rules, enabling controlled, dynamic dispersal of cluster resources.
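Metacontroller drives such a controller through a webhook: it POSTs the observed parent object and expects the desired children back. A hedged sketch of what one of these sync hooks could look like, using a hypothetical NamespaceRequest custom resource and an assumed spec.team field:

```python
# Hypothetical metacontroller sync hook: given a NamespaceRequest parent
# (an assumed custom resource), declare the namespace that should exist.
# Metacontroller reconciles the actual create/update/delete calls.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/sync", methods=["POST"])
def sync():
    parent = request.get_json()["parent"]
    team = parent["spec"]["team"]  # assumed field identifying the requester

    # Desired state: one namespace, labeled so RBAC rules can hand control
    # of it back to the requesting team.
    namespace = {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {
            "name": parent["metadata"]["name"],
            "labels": {"owner-team": team},
        },
    }
    return jsonify({"status": {"phase": "Ready"}, "children": [namespace]})

if __name__ == "__main__":
    app.run(port=8080)
```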
Member of a small tiger team responsible for "lifting and shifting" our aging monolith out of a legacy data center and into AWS. Each migrated component had to be modified to support both environments seamlessly over a period of months.
Among other components, I was responsible for migrating:
- web crawler traffic while preserving mission-critical SEO health metrics
- Solr indexes driving the flagship search bar
- the daily release train
- in-house Git hosting and related GitOps workflows
In each case above I found ways to fully migrate before the big day.
After the big day, I led a data-driven capacity-planning reassessment of our new cloud presence based on real traffic. I identified and implemented three separate opportunities that together cut six digits from our monthly cloud bill. This work also produced an architectural proposal describing roughly twenty specific, actionable ideas for using cloud features to improve our availability, performance, security, and budget posture. That roadmap is still being implemented and is expected to yield further savings of a similar magnitude.
-
Development
- Shell
- Python
- Java
-
Operations
- Kubernetes
- AWS
- Linux
- Networking
- Docker
- Jenkins
Moz -
Senior Developer (Big Data) Seattle, USA
Architected, scaled and maintained web-scale databases, including link indexes of the Internet.
One prominent index is a legacy application written in C++, covering the requirements of crawling, indexing, and serving queries over a restful interface. It worked, but it was tightly coupled and had poor test coverage, making new features difficult to add.
To address these issues I created sauce, a C++ dependency injection framework (Google's Fruit, which I would recommend in a C++14 world, was not yet published). Sauce lets a developer replace a component's dependencies with mocks or stubs under test. This in turn makes it far easier to simultaneously achieve good coverage and speed up tests that do not need, e.g., the filesystem or network.
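The pattern sauce automates, sketched in Python for brevity (the names here are illustrative, not sauce's actual API):

```python
# Constructor injection: dependencies arrive from outside, so a test can
# swap in a stub and avoid the filesystem or network entirely.
class LinkIndex:
    def __init__(self, storage):
        self.storage = storage  # injected dependency

    def lookup(self, url):
        return self.storage.read(url)

class DiskStorage:
    def read(self, url):
        raise NotImplementedError  # the real version would hit the filesystem

class StubStorage:
    def read(self, url):
        return {"links": 42}  # canned answer, no I/O

def test_lookup():
    index = LinkIndex(StubStorage())  # inject the stub under test
    assert index.lookup("example.com")["links"] == 42
```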
Before joining the Big Data team, I focused on the design and implementation of a campaign application built in Ruby on Rails. To retain modularity, the application was architected as a series of restful web services, each charged with a different feature of the user's data experience. This strategy was successful, but it implied a lot of boilerplate overhead as each new service exposed its particular assortment of database-backed resources.
To ease the burden of exposing new resources, I created a Ruby on Rails library called responder controller. It filled a feature gap in the then-new version 3 of the framework by providing a concise DSL for articulating the relationships between restful resources and their database-model peers.
-
Development
- C++
- Hadoop
- Shell
- Ruby
- Rails
- Python
-
Leadership
- Architecture
- Mentorship
- Adoption
- Recruitment
-
Operations
- Linux
- Provisioning
- Capacity Planning
- Recovery
-
Data Science
- Performance Scaling
- Feature Engineering
- Model Selection
Tech. Distribution Solutions -
Developer Seattle, USA (remote)
Sole developer and administrator for a startup in a niche market in retail electronics. The product was implemented in Ruby on Rails version 2 using a classic three-tier model.
-
Development
- Rails
- HTML
- CSS
- Javascript
-
Operations
- Provisioning
- Linux
- Capacity Planning
Business Logic -
Developer Chicago, USA
Implemented and maintained a retirement finance Monte Carlo simulator as a web service.
To let domain experts iterate rapidly while retaining BLAS-powered performance, the matrix-smashing aspects were implemented in Matlab. This work was exposed as a restful Tomcat service backed by a pool of Matlab processes, each reached from Java through a novel interface best described as something between a glorified pipe and a terminal emulator.
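The shape of that arrangement, sketched in Python with a placeholder worker command standing in for the actual Java-to-Matlab plumbing:

```python
# A pool of long-lived worker processes, each driven over its stdin/stdout
# pipe with one request and one response per line.
import subprocess
from queue import Queue

WORKER_CMD = ["python", "-u", "worker.py"]  # placeholder for the Matlab side

class WorkerPool:
    def __init__(self, size):
        self.idle = Queue()
        for _ in range(size):
            proc = subprocess.Popen(
                WORKER_CMD,
                stdin=subprocess.PIPE,
                stdout=subprocess.PIPE,
                text=True,
            )
            self.idle.put(proc)

    def call(self, request_line):
        proc = self.idle.get()  # block until a worker is free
        try:
            proc.stdin.write(request_line + "\n")
            proc.stdin.flush()
            return proc.stdout.readline().strip()  # one-line response
        finally:
            self.idle.put(proc)

# pool = WorkerPool(4)
# print(pool.call("simulate plan-123"))
```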
Business Logic is now NextCapital.
-
Development
- Java
- Matlab
-
Operations
- Provisioning
- Linux
-
Leadership
- Mentorship
- Recruitment
X by 2 -
Consultant Farmington Hills, MI USA
Designed and implemented web service features as a client-facing consultant in the insurance industry.
-
Development
- Java
- Websphere
-
Client Collaboration
- Engagement
- Screen Mocking
- UML
Sandia National Labs -
Graduate Student Albuquerque, USA
Researched possible L1/2 CPU cache optimizations driven by patterns in concurrent data access.
I explored heuristics for maximizing cache efficiency by controlling CPU thread affinity, and helped implement benchmark programs.
-
Computer Science
- Parallel Task Scheduling
- L1/2 Cache Efficiency
Rose-Hulman Ventures -
Student Developer Terre Haute, IN USA
Contributed to design and implementation of software components in several incubating projects.
Student developers usually moved between incubating projects on a 3- or 6-month basis. In this way I was exposed to and helped build several diverse projects:
- A truck engine diagnostic tool that used sensor data to predict or diagnose failure
- A real estate web application featuring an (ad-hoc) spatial index
- A chemical engineering desktop application to aid scaling chemical processes from laboratory to processing-plant settings
- A human-resources management web application
-
Development
- Java
- SQL
- HTML
- Javascript
- CSS
-
Leadership
- Mentorship
- Architecture
-
Operations
- Tooling
- Linux
University of Michigan -
Master's in Computer Science GPA 7.5 of 8
Emphasis on theory of computation, complexity theory, and cryptography. Advisor: Kevin Compton.