
Senior Data Engineer

last online a few hours ago
  • €50/hour
  • 75000 Tuzla
  • on request
  • en  |  bs  |  de
  • 24.03.2026
  • Contract ready

Brief introduction

Results-oriented Data Engineer with 5 years of experience designing and optimizing data pipelines, leveraging Redshift, Snowflake, Python, SQL, and cloud-based solutions.

Business details

 Freelance
 Tax number on file
 Professional liability insurance active

Qualifications

  • Python
  • Snowflake
  • SQL
  • Amazon Web Services (AWS)
  • Apache Airflow
  • Coalesce
  • Data Engineer (6 years)
  • dbt
  • ETL
  • Tableau

Project & work experience

Senior Data Engineer (permanent position)
Client name anonymized, Tuzla
8/2024 – present (1 year, 8 months)
Banking
Period of activity

8/2024 – present

Job description

Strong understanding of Coalesce’s architecture and core principles, including its metadata-driven approach and node-based interface for declarative data transformations
Proficient in working with YAML template structures and custom configurations to define node behavior, parameterization, and transformations across development environments
Hands-on experience designing and managing various Coalesce node types (Source, Transform, Stage, Snapshot, etc.), enabling reusable, modular, and version-controlled data pipelines
Familiar with Coalesce’s key features such as dynamic SQL generation, data lineage tracking, and modular design that enables scalable development and seamless integration with cloud data platforms like Snowflake
Designed and implemented data transformation workflows in dbt to model, cleanse, and optimize data for analytics in Snowflake
Developed modular and reusable dbt models to ensure efficient data transformation, version control, and documentation
Optimized Snowflake queries and tables by implementing clustering, partitioning, and warehouse scaling strategies for performance improvement
Integrated dbt with CI/CD pipelines using Git, dbt Cloud, and Snowflake to enable automated testing and deployment of data models
Monitored and troubleshot dbt runs and Snowflake performance using query profiling, logging, and materialized views to ensure efficiency
Built parameterized Coalesce pipelines that ingest from Snowflake internal stages (e.g., @INTERNAL_STAGE/...) and standardize data into curated tables using reusable YAML metadata (see the sketch after this list)
Managed version control with Git: feature branches, commits, and tagged releases to keep lineage and configuration changes auditable
Deployed by promoting Git branches from one Coalesce workspace (Dev) to another (Prod) via Coalesce's Git integration for consistent code and metadata across environments
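
For illustration, a minimal Python sketch of the internal-stage ingestion pattern above, using the snowflake-connector-python library. The account, credentials, table, and stage names are hypothetical placeholders; in the actual pipelines the equivalent COPY logic was generated and parameterized by Coalesce rather than hand-written.

import snowflake.connector

# Hypothetical connection values; real credentials came from workspace
# configuration, never hard-coded.
conn = snowflake.connector.connect(
    account="xy12345", user="ETL_USER", password="***",
    warehouse="LOAD_WH", database="RAW", schema="STAGING",
)

def load_from_internal_stage(table: str, stage_path: str) -> None:
    """Copy staged files into a curated table, capturing load metadata."""
    sql = f"""
        COPY INTO {table} (payload, source_file, loaded_at)
        FROM (
            SELECT $1, METADATA$FILENAME, CURRENT_TIMESTAMP()
            FROM {stage_path}
        )
        FILE_FORMAT = (TYPE = 'JSON')
        ON_ERROR = 'ABORT_STATEMENT'
    """
    with conn.cursor() as cur:
        cur.execute(sql)

load_from_internal_stage("RAW.STAGING.EVENTS", "@INTERNAL_STAGE/events/")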

Skills applied

Data Engineer

Data Engineer (permanent position)
Client name anonymized, Tuzla
9/2023 – 7/2024 (11 months)
Media industry
Period of activity

9/2023 – 7/2024

Job description

Migrated multiple repositories into one monolithic repository
Migrated from Azure Synapse Analytics to Databricks, transferring and adapting data, workloads, and queries to the Databricks platform while ensuring compatibility, optimizing performance, and maintaining data integrity (see the sketch after this list)
Implemented and maintained an efficient GitHub pipeline to streamline the software development process, ensuring seamless integration, automated testing, and continuous deployment for project enhancements
Developed a new architecture for an existing process
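
A minimal PySpark sketch of the Synapse-to-Databricks migration pattern described above, assuming JDBC connectivity to the Synapse dedicated SQL pool; the server, credential, and table names are hypothetical placeholders, not the client's actual setup.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read the source table from the Synapse dedicated SQL pool over JDBC.
synapse_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://example.sql.azuresynapse.net:1433;database=dw")
    .option("dbtable", "dbo.fact_sales")
    .option("user", "etl_user")
    .option("password", "***")
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .load()
)

source_count = synapse_df.count()

# Land the data as a Delta table, the Databricks-side target format.
synapse_df.write.format("delta").mode("overwrite").saveAsTable("bronze.fact_sales")

# Reconcile row counts to maintain data integrity across the migration.
target_count = spark.table("bronze.fact_sales").count()
assert source_count == target_count, "row-count mismatch after migration"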

Skills applied

Data Engineer

Data Engineer (permanent position)
Client name anonymized, Tuzla
12/2019 – 9/2023 (3 years, 10 months)
Healthcare
Period of activity

12/2019 – 9/2023

Job description

Designed relational database schemas, defining entities, attributes, and relationships to ensure data integrity and optimize performance; applied data modeling best practices such as normalization and denormalization to support efficient data manipulation and analysis across projects
Implemented Apache Airflow, creating and configuring an instance tailored to project requirements and utilizing operators such as BashOperator and PythonOperator; enhanced workflow efficiency by orchestrating complex data processes and integrating with external systems (see the DAG sketch after this list)
Utilized Python programming for data manipulation, analysis, and automation, enabling efficient data processing and insights generation
Designed and optimized data warehousing solutions using Redshift and Snowflake, ensuring scalable and high-performance data storage and retrieval
Worked on the migration from AWS Redshift to a Snowflake warehouse
Utilized Snowflake features such as Snowpipe and Time Travel to enhance data processing efficiency and enable seamless temporal analysis in a dynamic and collaborative environment
Implemented a transformation framework in Python to transform raw data from different data sources into the data warehouse
Implemented data validation techniques and quality assurance processes in Python, ensuring data accuracy, completeness, and consistency
Created and migrated transformation and validation frameworks (in-house tools) from a dedicated server to serverless infrastructure using AWS microservices such as ECS, ECR, Fargate, Lambda, Glue, and CloudWatch
Leveraged Terraform for infrastructure as code, automating the provisioning and management of data engineering environments and resources; used Terraform to create an EC2 instance and install Neo4j on it
Utilized S3 for scalable and cost-effective data storage, implementing appropriate partitioning strategies for optimized data retrieval
Used AWS S3 for large-capacity, low-cost file storage in a single geographical region; retrieved historical data from S3 in disaster-recovery scenarios, using the Python boto3 and psycopg2 libraries to pull data from S3 and import it into Redshift
Created interactive dashboards and reports using BI tools such as Domo, Tableau, and MicroStrategy, enabling stakeholders to visualize and analyze data effectively
Implemented Data Governance practices using Atlan, ensuring metadata management, data lineage tracking, and adherence to data governance policies
Leveraged data masking techniques with ALTR to protect sensitive data and ensure compliance with privacy regulations
Created APIs for data integration and consumption, enabling seamless data exchange between systems and applications
Utilized HTTP methods (GET, POST, PUT, DELETE) and status codes to handle requests and responses effectively. Implemented CRUD (Create, Read, Update, Delete) operations for various resources through REST endpoints. Conducted API testing using tools like Postman or Swagger to ensure functionality, reliability, and compliance with specifications
Used Bash shell scripting to create commands that check whether the previous process finished, whether it finished successfully, how many files were processed, and similar operational questions
Collaborated with cross-functional teams to understand business requirements, translate them into scalable data solutions, and deliver projects within timelines
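
A minimal sketch of the Airflow orchestration pattern referenced above, combining a BashOperator pre-check with a PythonOperator that issues a Redshift COPY via psycopg2. It assumes Airflow 2.x import paths; the bucket, cluster endpoint, IAM role, and credentials are hypothetical placeholders.

from datetime import datetime, timedelta

import psycopg2
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def copy_s3_to_redshift():
    """Issue a Redshift COPY that loads staged S3 files via psycopg2."""
    conn = psycopg2.connect(
        host="redshift-cluster.example.com",  # hypothetical endpoint
        port=5439, dbname="analytics", user="etl_user", password="***",
    )
    try:
        with conn.cursor() as cur:
            cur.execute("""
                COPY raw.events
                FROM 's3://example-bucket/events/'
                IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
                FORMAT AS JSON 'auto';
            """)
        conn.commit()
    finally:
        conn.close()


with DAG(
    dag_id="s3_to_redshift_daily",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    # Shell pre-check: fail fast unless the upstream export wrote its
    # _SUCCESS marker file (aws s3 ls exits non-zero when it is absent).
    check_upstream = BashOperator(
        task_id="check_upstream_finished",
        bash_command="aws s3 ls s3://example-bucket/events/_SUCCESS",
    )
    load = PythonOperator(
        task_id="copy_to_redshift",
        python_callable=copy_s3_to_redshift,
    )
    check_upstream >> load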

Skills applied

Data Engineer

Certificates

Coalesce Fundamentals
Coalesce
2025
Snowflake SnowPro Advanced Architect (ARA-C01)
Snowflake
2025
Snowflake SnowPro Core (COF-C02)
Snowflake
2024

Education

Bachelor of Mechanical Engineering
Mechanical Engineer
2016
Tuzla

About me

Skilled in transforming and validating data to deliver accurate insights and reliable reporting with modern BI tools. Proficient in S3 for scalable storage/retrieval and experienced with Snowflake across ingestion, transformation, security, governance and performance. I build schema-aware ingestion from S3 (Snowpipe, COPY INTO) and Snowflake internal stages, apply file/row-level validations and log metadata (file name, load date, checksum) for full traceability.
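
As a concrete illustration, a minimal sketch of that ingestion pattern using snowflake-connector-python: an auto-ingest Snowpipe that standardizes staged S3 files and records the source file name, with the load timestamp stamped server-side. The pipe, stage, table, and credential names are hypothetical placeholders.

import snowflake.connector

# Hypothetical connection values.
conn = snowflake.connector.connect(
    account="xy12345", user="ETL_USER", password="***",
    warehouse="LOAD_WH", database="RAW", schema="INGEST",
)

# Landing table stamps every load for traceability; loaded_at defaults
# server-side so the pipe itself stays a plain COPY transformation.
ddl = """
CREATE TABLE IF NOT EXISTS raw.ingest.events (
    payload     VARIANT,
    source_file STRING,
    loaded_at   TIMESTAMP_NTZ DEFAULT CURRENT_TIMESTAMP()
)
"""

# Auto-ingest pipe: standardizes staged files and logs the file name.
pipe = """
CREATE PIPE IF NOT EXISTS raw.ingest.events_pipe AUTO_INGEST = TRUE AS
COPY INTO raw.ingest.events (payload, source_file)
FROM (SELECT $1, METADATA$FILENAME FROM @raw.ingest.s3_events_stage)
FILE_FORMAT = (TYPE = 'JSON')
"""

with conn.cursor() as cur:
    cur.execute(ddl)
    cur.execute(pipe)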

For transformations, I orchestrate modular SQL/Python in Snowflake Notebooks, parameterize runs and document rules inline. I design pipelines to be idempotent and incremental (MERGE patterns, task scheduling, dependency control) to reduce compute and shorten SLAs. I implement surrogate keys, conformed dimensions and audit columns for reproducible analytics.
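
A minimal sketch of the idempotent MERGE pattern mentioned above, issued through snowflake-connector-python; the table, key, and column names are hypothetical placeholders. In production such a statement would typically run inside a scheduled Snowflake task with dependency control, per the paragraph above.

import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345", user="ETL_USER", password="***",
    warehouse="TRANSFORM_WH", database="ANALYTICS", schema="CORE",
)

# Upsert staged changes by business key: re-running the same batch
# updates existing rows instead of duplicating them, which is what
# makes the load idempotent.
merge_sql = """
MERGE INTO analytics.core.dim_customer AS tgt
USING analytics.staging.customer_updates AS src
    ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN UPDATE SET
    tgt.email      = src.email,
    tgt.updated_at = CURRENT_TIMESTAMP()
WHEN NOT MATCHED THEN INSERT (customer_id, email, created_at, updated_at)
    VALUES (src.customer_id, src.email, CURRENT_TIMESTAMP(), CURRENT_TIMESTAMP())
"""

with conn.cursor() as cur:
    cur.execute(merge_sql)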

Security and governance are first-class concerns: least-privilege RBAC with role hierarchies per environment, separation of duties, and, where needed, dynamic data masking and row access policies built as parameterized, reusable policies that protect sensitive data without blocking analysis. I continuously tune performance and cost by right-sizing warehouses, analyzing query profiles, and improving pruning via clustering and sensible models. I also monitor consumption to prevent runaway costs and keep latency predictable as data grows.
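
A minimal sketch of one such reusable dynamic-masking policy, executed via snowflake-connector-python; the role, schema, and table names are hypothetical placeholders. Defining the policy once and attaching it per column keeps the rule reusable rather than copy-pasted.

import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345", user="GOVERNANCE_ADMIN", password="***",
    role="SECURITYADMIN", warehouse="ADMIN_WH",
)

statements = [
    # One reusable policy: full value for privileged roles, masked otherwise.
    """
    CREATE MASKING POLICY IF NOT EXISTS governance.policies.email_mask
        AS (val STRING) RETURNS STRING ->
        CASE
            WHEN CURRENT_ROLE() IN ('ANALYST_PII') THEN val
            ELSE REGEXP_REPLACE(val, '.+@', '*****@')
        END
    """,
    # Attach the same policy wherever an email column appears.
    """
    ALTER TABLE analytics.core.dim_customer
        MODIFY COLUMN email SET MASKING POLICY governance.policies.email_mask
    """,
]

with conn.cursor() as cur:
    for stmt in statements:
        cur.execute(stmt)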

My validation practice combines automated checks and clear reporting: type/nullable rules, referential integrity, and reconciliations after every load, with DQ dashboards and actionable error logs that speed triage and build stakeholder trust.
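
A minimal sketch of such automated checks in Python: null and referential-integrity rules plus a source-vs-target reconciliation, returning actionable failure messages for the DQ dashboard. The table names are hypothetical, and conn is assumed to be an open snowflake-connector-python connection.

def run_quality_checks(conn) -> list[str]:
    """Return a list of human-readable failures; empty means all checks passed."""
    failures = []
    checks = {
        "null_customer_id": """
            SELECT COUNT(*) FROM analytics.core.dim_customer
            WHERE customer_id IS NULL
        """,
        "orphaned_orders": """
            SELECT COUNT(*) FROM analytics.core.fct_orders o
            LEFT JOIN analytics.core.dim_customer c USING (customer_id)
            WHERE c.customer_id IS NULL
        """,
    }
    with conn.cursor() as cur:
        for name, sql in checks.items():
            cur.execute(sql)
            (bad_rows,) = cur.fetchone()
            if bad_rows:
                failures.append(f"{name}: {bad_rows} offending rows")
        # Reconciliation: loaded rows must match the staged source batch.
        cur.execute("SELECT COUNT(*) FROM analytics.staging.customer_updates")
        (src,) = cur.fetchone()
        cur.execute(
            "SELECT COUNT(*) FROM analytics.core.dim_customer "
            "WHERE updated_at >= CURRENT_DATE()"
        )
        (tgt,) = cur.fetchone()
        if src != tgt:
            failures.append(f"reconciliation: staged {src} vs loaded {tgt}")
    return failures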

Beyond Snowflake, I use the Coalesce platform to model pipelines as metadata-driven graphs that generate consistent SQL and auto-document lineage. I standardize with reusable node templates, parameters and macros to reduce code drift and accelerate onboarding. All work is version-controlled end-to-end: feature branches, peer-reviewed PRs, tagged releases and deployment between Dev/Test/Prod via Coalesce’s Git integration so tables, columns and logic move together in an auditable, repeatable way.

On the analytics side, I shape clean, conformed datasets and semantic layers aligned to business terms, set refresh cadences to stakeholder needs and publish data dictionaries/usage notes to enable fast, self-service insight. I collaborate closely with engineers, analysts and product owners, communicating trade-offs early and iterating pragmatically to meet real-world constraints while keeping designs extensible.

In short, my experience spans the modern data lifecycle: landing from S3/internal stages, transforming in Snowflake Notebooks and Coalesce, enforcing RBAC/masking, validating quality, deploying with Git across environments and serving trusted data to BI tools—turning raw data into measurable business outcomes.

Additional skills

In the past year I earned three certifications: Snowflake SnowPro Core (COF-C02), Snowflake SnowPro Advanced Architect (ARA-C01), and Coalesce Fundamentals. These credentials validate my architectural depth and hands-on proficiency across Snowflake and Coalesce, strengthening the quality and reliability of the solutions I deliver.
I am currently preparing for the Snowflake SnowPro Advanced Data Engineer (DEA-C02) certification.

Personal details

Language
  • English (native)
  • Bosnian (native)
  • German (basic knowledge)
Willingness to travel
on request
Home office
preferred
Profile views
222
Age
31
Professional experience
6 years and 2 months (since 01/2020)
Project management
5 years
