The company name “data&mie” simply means “data&me” (me = mie) in the Karelian dialect of Finnish that is spoken in the eastern parts of Finland where I’m from. Although I no longer live there as I moved to Espoo in 2010, I still have a soft spot for the region and its people. The company name celebrates both the dedication to data as a craft and the love of one’s roots.
I’m a data and analytics engineer who specializes in Python, open-source tools (e.g., dbt and Airflow) and the AWS cloud. I’ve also worked in software engineering and infrastructure management during my career. These disciplines have had a huge influence on the way I work and helped me integrate Test-Driven Development (TDD) and DevOps practices, such as CI/CD and Infrastructure-as-Code, into my development workflow.
As a person, I’m calm, open-minded and determined. The latter two are the driving force behind my creativity. When I come up with a new idea, I take the initiative and see it through. Becoming a freelancer is one example of this.
I’m also a dog-loving person. Being around dogs helps me ground myself and focus on the now instead of what has been and what is to come. The playful way dogs (and children) approach life is a continuous source of inspiration to me.
MY TOP SKILLS
In January 2021 I took the leap and started my self-employment journey as a freelancer. I work internationally (EU and US) on data/analytics engineering projects with companies of all sizes.
I was a Senior Data Engineer at Aitomation in 2020. The role included developing and maintaining data lakes and ETL/ELT pipelines for customers in different industries. Here I continued my work with dbt by providing migration, deployment and training services to clients.
For more information on dbt, see dbt deployment and training in My Services.
I was a Full Stack Data Engineer at Synoste from 2015 to 2020. The role included data and software engineering and infrastructure management, among other things. This is where I first started working with dbt.
Here are some example projects from my career.
DATA INFRASTRUCTURE FROM SCRATCH
Design and implementation of a data infrastructure from scratch to automate data analytics, along with training. The automation helped alleviate bottlenecks in the client’s product development and enabled them to analyze data in much larger quantities than before. With the training, the client’s product development team was equipped to continue their analytics efforts independently.
REFACTORING AND OPTIMIZATION OF ANALYTICS
Refactoring and optimization of an analytical SQL modeling layer built on top of a data lake. The modeling layer was migrated to dbt during the refactoring, which yielded the following improvements. Firstly, maintainability improved: all the analytical SQL models are now organized and tested in a single source-of-truth repository. Secondly, system monitorability was enhanced by automated model testing and alerting. Lastly, the modeling layer’s infrastructure costs were reduced by ~80% through query and materialization optimization.
DATA WAREHOUSE DEPLOYMENT
Deployment of a dbt data warehouse and dbt training for two data analysts in the client’s data team. The project enabled the client to kick off their data warehousing efforts, with their internal data team having the required skills and knowledge to continue development independently.
ANALYTICS INFRASTRUCTURE MIGRATION
Migration of an analytics infrastructure from Redshift to Snowflake. The project