<p><strong>Location:</strong> On-Site (Hong Kong / Macau)<br><strong>Employment Type:</strong> Contract<br><strong>Department:</strong> IS&T Operation and Support – Database Operation</p>
<p><strong>Job Summary:</strong></p>
<p>We are seeking a skilled Data Integration Specialist to implement and maintain robust data integration solutions across multiple databases and systems. The ideal candidate will have hands-on experience with Apache Kafka, ETL/ELT pipelines, monitoring tools (Prometheus, Grafana, ElasticSearch), and supporting BI tools (e.g., Power BI, Tableau). A basic understanding of Python scripting and familiarity with Jupyter Notebooks for data analysis will be a plus.</p>
<p>This role may require on-call support for critical data integration systems, ensuring high availability, quick incident response, and minimal downtime. You will ensure seamless data flow, optimize performance, and enable real-time analytics to support business decision-making.</p>
<p><strong>Key Responsibilities:</strong></p>
<p>· Design and implement data integration solutions across heterogeneous databases (SQL, NoSQL, cloud data warehouses).</p>
<p>· Develop and maintain real-time data pipelines using Apache Kafka (Kafka Connect, Kafka Streams).</p>
<p>· Optimize ETL/ELT workflows for performance, scalability, and reliability.</p>
<p>· Monitor data pipelines and infrastructure using Prometheus, Grafana, and ElasticSearch.</p>
<p>· Provide support within a structured on-call rotation for critical data integration systems, including incident response and troubleshooting, to restore 24/7 business operations when required.</p>
<p>· Collaborate with BI teams to ensure data availability and accuracy for reporting and analytics.</p>
<p>· Troubleshoot and resolve data integration issues (latency, schema mismatches) in real time.</p>
<p>· Work with cloud platforms (Azure, AliCloud) for data storage and processing.</p>
<p>· Implement data governance and security best practices.</p>
<p>· Document data flows, integration processes, and system architectures.</p>
<p>· Utilize Python scripting and Jupyter Notebooks for ad-hoc data analysis and automation.</p>
<p><strong>Required Skills & Qualifications:</strong></p>
<p>· 3+ years in data integration, ETL development, or data engineering.</p>
<p>· Strong experience with Apache Kafka (setup, configuration, producers/consumers).</p>
<p>· Proficiency in SQL and NoSQL databases (PostgreSQL, MySQL, MongoDB, MSSQL, ClickHouse, etc.).</p>
<p>· Hands-on experience with ETL tools (Airflow, SSIS) or custom scripting (Python).</p>
<p>· Experience with monitoring and observability tools (Prometheus, Grafana, ElasticSearch).</p>
<p>· Ability to provide support for critical production systems.</p>
<p>· Knowledge of BI tools (Power BI, Tableau, Looker) and data modeling for analytics.</p>
<p>· Familiarity with cloud data services (Azure Data Factory, Pipeline, OneLake).</p>
<p>· Understanding of data warehousing concepts (star schema, dimensional modeling).</p>
<p>· Strong problem-solving and debugging skills under pressure.</p>
<p><strong>Preferred Qualifications:</strong></p>
<p>· Experience with Jupyter Notebooks for exploratory data analysis.</p>
<p>· Knowledge of CDC (Change Data Capture) tools (e.g., Debezium, Attunity Replicate).</p>
<p>· Familiarity with stream processing frameworks (e.g., Flink, Spark Streaming).</p>
<p>· Certifications in Kafka, cloud platforms, or data engineering are preferred.</p>