Experience
Summary
Throughout my career in data science, I’ve honed my skills across a variety of domains, demonstrating an ability to adapt and excel in each. My journey, though unconventional, has allowed me to integrate traditional data science techniques with broader software and data engineering principles.
I’ve spent a significant portion of my career analysing datasets and crafting robust machine learning models, gaining a solid foundation in both classical data science and the more contemporary field of deep learning. In parallel, I’ve delved into data engineering, dedicating substantial effort to constructing scalable data pipelines.
My proficiencies are not confined to data-centric tasks. I have also developed strong software engineering skills, mastering Python and familiarising myself with a range of software patterns. My experience includes designing and implementing RESTful APIs, as well as creating event-driven services that respond dynamically to real-time events. Finally, my capabilities extend to cloud infrastructure: I have experience deploying and managing such systems and overseeing the complete lifecycle of data science applications. This blend of skills, I believe, sets me apart as a versatile professional in the tech sector.
Key Technologies
Over the years I have gained expertise in many tools and technologies. Some highlights are:
- Python and SQL
- scikit-learn and TensorFlow
- Apache Spark
- Relational databases (e.g. PostgreSQL)
- Non-relational databases (e.g. Redis and Elasticsearch)
- Dagster and Apache Airflow
- AWS and Terraform
- FastAPI, Flask, and Django
Technical Advisor
(February 2018 - Present)
As a Technical Advisor, I specialise in helping small companies and early-stage startups develop their data strategies and system architectures, and leverage machine learning to optimise their operations. My experience spans numerous industries, including healthcare, patent law, and sport science. In my role, I not only provide strategic advice but also get involved in development work when necessary. My goal is to ensure the companies I work with can leverage data and technology effectively to drive their success. Whether it’s building models, advising on system architecture, or identifying innovative applications of machine learning, I bring a holistic and dedicated approach to every project I undertake.
Some examples of projects I have undertaken are:
- Building a custom word embedding model (word2vec, built using Gensim) to enable better matching of patent documents. This had multiple use cases, including patent search and discovery, infringement identification, and patent classification (see the sketch after this list).
- Using large language models (GPT-4) to automate the initial stages of the interview process.
- Designing automatically adjusting training plans for triathletes.
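To illustrate the embedding approach mentioned above, here is a minimal Gensim word2vec sketch. The corpus file, preprocessing, and hyperparameters are hypothetical stand-ins, not the configuration used in the actual project:

```python
# Minimal sketch: training a word2vec model on patent text with Gensim.
# The file path, tokeniser, and hyperparameters are illustrative only.
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess

# Assume one patent abstract per line in a plain-text file (hypothetical path).
with open("patent_abstracts.txt") as f:
    sentences = [simple_preprocess(line) for line in f]

model = Word2Vec(
    sentences,
    vector_size=300,  # embedding dimensionality
    window=5,         # context window size
    min_count=5,      # ignore rare tokens
    workers=4,
)

# Nearest neighbours in embedding space support search, matching, and
# downstream classification features.
print(model.wv.most_similar("semiconductor", topn=5))
```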
DeepL
Staff Data Scientist (February 2023 - Present)
In my role at DeepL, I primarily focus on developing Python- and SQL-based data pipelines that process both commercial and platform usage data, transforming this raw input into structured time-series data. Built on our data warehouse, ClickHouse, these pipelines are crucial to our data management and analytics capabilities.
A key project I am currently leading involves creating a Slack bot powered by the large language model (LLM) GPT-4. This bot is designed to interpret analytics queries from business teams, generate appropriate SQL queries based on our data warehouse schema, and present the findings in natural language. Once deployed, this tool is expected to significantly reduce the volume of ad-hoc stakeholder queries.
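As a rough sketch of the bot’s core text-to-SQL step: the schema, prompt, and function below are illustrative assumptions, not the production implementation.

```python
# Sketch of the query-generation step; the schema and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCHEMA = """
-- Hypothetical warehouse table
CREATE TABLE usage_events (ts DateTime, team String, api_calls UInt64);
"""

def question_to_sql(question: str) -> str:
    """Ask GPT-4 to translate a natural-language question into SQL."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": f"Generate ClickHouse SQL for this schema:\n{SCHEMA}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(question_to_sql("How many API calls did each team make last week?"))
```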
In addition to these technical contributions, I have revamped the data science candidate interview process to make it more rigorous and effective. I also champion data engineering best practices across our team, promoting the use of the medallion architecture and industry-standard tools like dbt and Airflow.
Infogrid
Senior Data Scientist (February 2021 - February 2023)
At Infogrid, my role as Senior Data Scientist combined hands-on data science and data engineering with technical leadership. A significant part of the role was the development and deployment of Long Short-Term Memory (LSTM) models that processed time-series data from temperature sensors to predict state changes, such as desk occupancy in offices and water flow through pipes. This work was crucial to enhancing building management systems, enabling automated, real-time decisions that improved operational efficiency, and it underpinned Infogrid’s primary source of revenue.
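A minimal sketch of this kind of LSTM classifier follows; the window length, feature count, and architecture are illustrative assumptions rather than the production model:

```python
# Sketch: an LSTM that maps a window of sensor readings to a binary state
# (e.g. desk occupied / unoccupied). Shapes and layers are illustrative.
import numpy as np
from tensorflow import keras

WINDOW, FEATURES = 60, 1  # 60 time steps of a single temperature channel

model = keras.Sequential([
    keras.Input(shape=(WINDOW, FEATURES)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1, activation="sigmoid"),  # P(state change)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Dummy data purely to show the expected shapes.
X = np.random.rand(128, WINDOW, FEATURES).astype("float32")
y = np.random.randint(0, 2, size=(128, 1))
model.fit(X, y, epochs=1, batch_size=32)
```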
In addition to model development, I led the creation of several Python-based core services using a suite of tools including FastAPI, Redis, TimescaleDB, Amazon SQS, Amazon SNS, and DynamoDB. These efforts were aimed at building scalable, event-driven applications and advancing the adoption of MLOps practices within the organisation. These projects supported the infrastructure needed to deploy sophisticated models and applications, ensuring robustness and scalability across our data-driven initiatives.
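For flavour, a minimal sketch of an event-driven ingestion endpoint in this style; the route, Redis keys, and pub/sub channel are hypothetical, not the actual service design:

```python
# Sketch of an event-driven FastAPI service: a reading arrives, is cached in
# Redis, and an event is published for downstream consumers. Names are
# illustrative only.
from fastapi import FastAPI
from pydantic import BaseModel
import redis

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379)

class SensorReading(BaseModel):
    sensor_id: str
    value: float

@app.post("/readings")
def ingest(reading: SensorReading):
    # Store the latest value and notify subscribers on a pub/sub channel.
    cache.set(f"latest:{reading.sensor_id}", reading.value)
    cache.publish("readings", reading.model_dump_json())
    return {"status": "queued"}
```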
Opensignal
Senior Data Scientist (January 2018 - January 2021)
As a Senior Data Scientist at Opensignal, I focused on developing new metrics to measure mobile network performance with statistical rigour. Terabytes of crowd-sourced mobile sensor and location data served as the foundation for these metrics. Beyond developing the metrics, I played a crucial role in implementing them in scalable, efficient data pipelines built with Python, PySpark, and Airflow and deployed on AWS infrastructure.
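As an illustration of this style of pipeline, here is a small PySpark sketch; the paths, column names, and chosen statistic are hypothetical, not Opensignal’s actual metrics:

```python
# Sketch: aggregating crowd-sourced speed-test readings into a per-network
# metric with PySpark. Paths, columns, and the metric are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("network-metrics").getOrCreate()

readings = spark.read.parquet("s3://bucket/speed_tests/")  # hypothetical path
metrics = (
    readings
    .groupBy("network", "region")
    .agg(
        F.percentile_approx("download_mbps", 0.5).alias("median_download"),
        F.count("*").alias("sample_size"),
    )
)
metrics.write.mode("overwrite").parquet("s3://bucket/metrics/")
```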
CognitionX
Data Scientist (December 2016 - January 2018)
At CognitionX, my role was less traditional for a Data Scientist: I led the development team. My responsibilities were wide-ranging, encompassing technical leadership in data science and software engineering, product management, and team leadership. On the technical front, I was involved in cloud platform architecture design and administration. I developed a web portal for finding resources, companies, and people related to artificial intelligence, as well as a system for collecting award nominations and votes for the CogX 2017 conference. The technologies I used in this position included AWS, Django, Elasticsearch, Flask, Neo4j, PostgreSQL, and Python.
Big Data Partnership
Data Scientist (October 2014 - December 2016)
My first position in the tech industry was as a Data Scientist at Big Data Partnership, where I worked across a range of sectors, including ad-tech, healthcare, and aviation. I used a variety of machine learning methods, from classical supervised techniques like logistic regression, SVMs, and random forests to unsupervised methods like k-means, DBSCAN, and PCA. I also got the opportunity to work with survival analysis, Markov models, and deep learning for computer vision. In addition to my technical duties, I provided subject-matter-expert support to the sales team and delivered data science technical training. I developed a training course titled “Introduction to Data Science in a Big Data World”, which generated revenue in excess of £250,000 in its first six months. This position helped me learn Python and SQL to a professional level and gain familiarity with a wide range of tools and technologies.
Atomic Weapons Establishment
Research Scientist (September 2008 - September 2010)
In this position, I used finite element methods and numerical analysis to model the propagation of high-pressure shock waves in piezoelectric materials initiated by high-velocity impacts, with the aim of optimising the electrical output of the system. In addition, I was enrolled on the AWE graduate training programme, which involved a range of complementary skills training, such as project management and communication skills.
Education
PhD: Experimentally Verified Reduced Models of Neocortical Pyramidal Cells
University of Warwick (October 2011 - September 2014)
My PhD focused on using simplified models of neurons (nerve cells) to delve into two important areas in the field of computational neuroscience: incorporating observed variability into models of brain networks and improving our understanding of how neurons recover after transmitting a signal.
Firstly, although brain network models often include different classes of neurons, they usually overlook the variability within those classes. To address this, I studied the responses of a specific type of neuron to different stimuli and measured their electrical properties. This research resulted in an algorithm that can generate populations of a simplified neuron model while maintaining the observed variability. This tool could help in further exploring variability in brain network models.
Secondly, I examined the dynamic nature of the neuron’s “spike threshold” - the level of stimulation needed to make a neuron fire a signal. After a neuron fires, the threshold jumps up and then slowly returns to its baseline level. I found that a simple model accurately captured this behaviour. I also discovered that a two-variable model could be simplified further, without losing accuracy, when certain properties of the neuron were similar. However, I found that the observed threshold dynamics could not be fully explained by one previously suggested mechanism.
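As an illustration, such jump-and-decay behaviour is commonly captured by an exponential-relaxation model of the following form; this is a generic sketch, not necessarily the exact model from the thesis:

```latex
% Illustrative spike-threshold model: after each spike at time t_k, the
% threshold theta jumps by \Delta\theta and relaxes back to baseline
% \theta_0 with time constant \tau.
\theta(t) = \theta_0 + \Delta\theta \sum_{t_k < t} e^{-(t - t_k)/\tau}
```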
MSc: Mathematical Biology and Biophysical Chemistry
University of Warwick (Distinction, October 2010 - September 2011)
MMath: Mathematics
University of Oxford (Upper second class, October 2004 - July 2008)