When I joined Amazon in 2019, I started my AWS Cloud journey. On my first day, I was assigned the task of creating an automated microservice that spins up new EC2 fleets and autoscales them horizontally based on traffic load. Customers would then use these servers to play video games in the cloud. I dove into my new assignment with enthusiasm, reading documentation and tutorials and asking team members for advice. Despite starting with no AWS cloud knowledge, I pushed out my first code commit for review in time to meet my first two-week sprint deadline.
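As a minimal sketch of that kind of horizontal autoscaling, here is roughly what attaching a target-tracking scaling policy to an EC2 Auto Scaling group looks like with boto3. The group name and CPU target below are hypothetical placeholders, not values from the actual service.

```python
# Sketch: attach a target-tracking scaling policy to an EC2 Auto Scaling
# group so the fleet grows and shrinks with load. The group name and the
# 50% CPU target are hypothetical placeholders.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="game-server-fleet",  # hypothetical group name
    PolicyName="scale-on-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        # Keep average CPU near 50%; AWS adds or removes instances to hold it.
        "TargetValue": 50.0,
    },
)
```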
As part of the Amazon Luna team (video games in the cloud), I went on to work on several major projects: a service that let game developers automatically upload their games into the cloud (Lambda, Step Functions, and DynamoDB); game session creation and monitoring in partnership with the UI team; fleet creation and management of game servers; machine learning traffic prediction to autoscale server fleets intelligently (SageMaker); a service simulation system, written in Python, for testing new ML models before production deployment; and DevOps console work in Apollo, GraphQL, React, and Node.js.
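To give a flavor of the first of those projects, here is a hedged sketch of what one Lambda step in such an upload pipeline might look like: a handler that records upload metadata in DynamoDB for a Step Functions workflow to track. The table name and fields are hypothetical, not the actual Luna service.

```python
# Sketch of a Lambda step in a game-upload pipeline: record upload metadata
# in DynamoDB so a Step Functions workflow can track the build's progress.
# Table name and fields are hypothetical.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("GameUploads")  # hypothetical table name

def handler(event, context):
    # Step Functions passes the previous state's output as `event`.
    table.put_item(
        Item={
            "uploadId": event["uploadId"],
            "developerId": event["developerId"],
            "status": "UPLOADED",
        }
    )
    # The returned dict becomes the input of the next state in the workflow.
    return {"uploadId": event["uploadId"], "status": "UPLOADED"}
```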
After two years on the Luna team, I joined the Alexa Accessibility team, which provided closed captioning, computer vision, and non-verbal interaction systems for adults and children with disabilities. My focus on this team was again AWS: migrating legacy corporate Alexa servers to AWS microservices built on Lambda and CloudWatch, along with first-party Alexa Developer apps.
Before my journey into cloud computing, I spent several years working to make the world a better place through research on improving the accessibility of visual educational materials and technologies for children who are completely blind or have low vision. Under the advisement of Dr. Eelke Folmer at the University of Nevada, I produced several publications and research projects on the accessibility of geometry, diagrams, and maps on touchscreens for blind children. My research publications in top conferences and journals can be found on my Google Scholar page here and are summarized in more detail on my CV here.
After my PhD committee approved my dissertation ("Making Spatial Information Accessible on Touchscreens for Users Who Are Blind"), I moved on to accessibility for adults in a business setting on the Microsoft Office 365 core experiences team. I designed and implemented several improvements to the accessibility of Microsoft Office apps, increasing usability, reliability, and satisfaction for customers with disabilities. During this time, I became quite familiar with Microsoft's User Interface Automation (UIA) API, general Win32 APIs, UIA Notifications, and C++.
My current goal is to expand my knowledge of LLM training and inference in the AWS cloud, to keep abreast of the latest bleeding-edge research, and to further understand the mathematical underpinnings of LLMs (linear algebra, matrices, calculus, and transformer self-attention, to name a few). Toward this goal, I completed several graduate-level courses in Machine Learning, Deep Learning, Natural Language Processing, Reinforcement Learning, and Generative AI through the University of Texas at Austin. These courses taught me the fundamentals of AI along with the practical steps of training and inference and the technical aspects of hosting small and large-scale models in the AWS cloud.
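For the self-attention piece in particular, the core computation is compact enough to sketch in a few lines of NumPy. This is the standard scaled dot-product attention from the transformer literature, not code from any of the courses or systems mentioned above, and the dimensions are arbitrary toy values.

```python
# Standard scaled dot-product self-attention: softmax(Q K^T / sqrt(d_k)) V.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_k) projections."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                 # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                              # weighted sum of values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                         # 4 tokens, d_model = 8
w = [rng.normal(size=(8, 8)) for _ in range(3)]     # toy Q/K/V projections
print(self_attention(x, *w).shape)                  # (4, 8)
```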