Archived project - Kafka Engineer 100% (m/f/d)
Company name visible to PREMIUM members
Project description
Your tasks:
• Implementation of next-generation data and analytics systems
• Develop a data-transfer backbone for event processing and streaming, underpinning several projects that modernise the data-management infrastructure: building a new data warehouse and data lake architecture, offering data-lab environments for self-service analytics, and enhancing support for machine learning and AI use cases
• Design, implementation and maintenance of an enterprise installation of Kafka
• Participating in the creation and execution of data governance, including message design, schema validation, and versioning
• Working with developers from other business unit technical teams to assist them in implementing functional solutions using Kafka
• Supporting the establishment of a platform SLA, including defining non-functional requirements
• Working with networks, data center, and infrastructure teams to optimize hardware solutions for the installation of Kafka
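The governance duties above (message design, schema validation, versioning) centre on compatibility checks between schema versions. As a rough illustration, here is a hypothetical, heavily simplified sketch of a backward-compatibility check in Python; the dict-based schema format and the function name are inventions for this example, not the API of any real schema registry:

```python
def is_backward_compatible(old, new):
    # Backward compatibility: a reader using the `new` schema must still
    # be able to decode data written with `old`. In this simplified model:
    #   - a field added in `new` must carry a default value
    #   - a field present in both schemas must keep its type
    # (Loosely modelled on Avro-style compatibility rules.)
    for name, spec in new.items():
        if name not in old:
            if "default" not in spec:
                return False  # new required field: old records lack it
        elif spec["type"] != old[name]["type"]:
            return False      # type change: old records undecodable
    return True


# Hypothetical schema versions for a user event topic
v1 = {"user_id": {"type": "string"}}
v2 = {"user_id": {"type": "string"},
      "email": {"type": "string", "default": ""}}
v3 = {"user_id": {"type": "int"}}

print(is_backward_compatible(v1, v2))  # True: added field has a default
print(is_backward_compatible(v1, v3))  # False: user_id changed type
```

In production this logic would live in a schema registry enforcing a configured compatibility mode on every schema registration, rather than in application code.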
Your qualifications:
• Graduate degree in computer science, mathematics, engineering, or another IT-related technical field, with experience in software development or data engineering. Above all, you have a passion for data and analytics
• Profound experience in designing, implementing and maintaining Apache Kafka as part of the Cloudera Data Platform distribution
• Experience with data pipeline and workflow management tools: Airflow, Rundeck, NiFi, etc.
• Experience with stream-processing systems: Kafka, Spark-Streaming, etc.
• Experience in designing and developing high-volume, mission-critical transactional data integration solutions
• Knowledge of message queuing, stream processing and highly scalable ‘big data’ data stores
• Experience building processes supporting data transformation, data structures, metadata, dependency and workload management
• Proven experience of designing middleware message governance, topic subscription and management, including schema validation and version compatibility
• Experience with agile methodologies such as Scrum
• Knowledge of service-oriented architecture and experience in API creation and management technologies (REST, SOAP etc.)
• Exposure to data modelling and business intelligence systems (dimensional modelling, data mining, predictive analytics)
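The stream-processing experience listed above typically comes down to patterns such as keyed, windowed aggregation. As a toy, dependency-free sketch (the event data and function are hypothetical, not tied to Kafka Streams or Spark Streaming APIs):

```python
from collections import defaultdict


def tumbling_window_counts(events, window_sec):
    # Assign each (timestamp, key) event to a fixed, non-overlapping
    # (tumbling) window and count occurrences per (window, key) pair --
    # the core of many streaming aggregation jobs, here framework-free.
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_sec)
        counts[(window_start, key)] += 1
    return dict(counts)


# Hypothetical event stream: (timestamp in seconds, event type)
events = [(0, "click"), (3, "click"), (7, "view"), (12, "click")]
print(tumbling_window_counts(events, 10))
# {(0, 'click'): 2, (0, 'view'): 1, (10, 'click'): 1}
```

A real deployment would add what the frameworks provide out of the box: event-time semantics, late-data handling, and fault-tolerant state.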
Your benefits:
• A highly motivated team and open communication
Project duration: 6 months+
Are you a freelancer? Then we look forward to your application!
Contact details
As a registered member of freelance.de, you can apply directly to this project.